POPULARITY
TLDR - I'm starting a newsletter to reach more listeners and avoid the social algorithms. https://substack.com/@cccdundee A quick update on where things are at with the podcast and what's coming next. This season's felt a little out of rhythm — not because of the guests (they've been brilliant) — but more because of what's been going on behind the scenes. In this episode, I chat about why things have felt a bit off, how changes in social media have made it harder to share episodes, and why I've decided to start a newsletter on Substack. It's a way to cut through the noise and make sure you don't miss new episodes — plus I'll be sharing a bit more insight and digging into some of the back catalogue too. Sign up now: https://substack.com/@cccdundee
Today's buffet episode is brought to you by the late spring that has FINALLY arrived!
What you'll hear:
Question from a listener: I was wondering if I could request a topic... I have been recently feeling a bit down due to a break-up with my aerial duo partner/best friend. I know this situation is probably pretty common but I am finding it hard to move on even though I know it is probably for the best. TLDR: I have no idea. I'm so sorry. 1:45
Storytime from my own life about friend break-ups 2:00
Sometimes the end isn't the end (sometimes it is) 3:50
Healing isn't attaining something, it's being free of something 6:00
Question from a listener: Podcast idea: how to navigate a show when your director gives you no constructive feedback, no direction, and has bad taste. Asking for a friend. 6:32
Mount Stupid on the Dunning-Kruger curve 7:50
Not every project is going to be awesome, which is a side effect of being a professional 9:15
It is the role of a director to solve problems in the pursuit of the vision 10:30
Question from a listener: Yesterday I had a chat with a student of mine who is over 60 about how/where she could perform. I know she is not the only one who found aerial later than what is normally considered for gigs and shows. So I thought I'd suggest the topic of alternative routes to performing for those "aged out" or in other circumstances. 12:30
My turn to ask for something... does anyone know a physical performer in their 60s who has found access to stages? 13:08
Any questions you have about the intersection of art and activism, please do send them my way so I can ask an upcoming guest, Dr. Jess Allen! 14:20
Long luxurious Patreon update!
Don't go back to sleep.
xo Rachel
Sign up here for monthly blasts and functional woo
Find me on Instagram
Support this podcast on Patreon
TLDR: I've recently started as a "Research Fellow" at Forethought (focusing on how we should prepare for a potential period of explosive growth and related questions). I left my role on the CEA Online Team, but I still love the Forum (and the Forum/CEA/mod teams) and plan on continuing to be quite active here. I'm also staying on the moderation team as an advisor. ➡️ If you were planning on reaching out to me about something Forum- or Online-related, you should probably reach out to Toby Tremlett or email forum@effectivealtruism.org. What's in this post? I had some trouble writing this announcement; I felt like I should post something, but didn't know what to include or how to organize the post. In the end, I decided to write down and share assorted reflections on my time at CEA, and not really worry about putting everything into a cohesive frame or [...]
Outline:
(00:44) What's in this post?
(02:17) Briefly: more context on the change
(03:45) A note on EA and CEA
(04:32) Assorted notes from my time at CEA
(04:37) Some things about working at CEA that I probably wouldn't have predicted
(04:44) 1. Working with a manager and working in a team have been some of the best ways for me to grow.
(05:33) 2. I like CEA's team values and principles a lot more than I expected to. (And I want to import many of them wherever I go.)
(08:39) 3. A huge number of people I worked and interacted with are incredibly generous and compassionate, and this makes a big difference.
(10:40) Some things about my work at CEA that were difficult for me
(10:46) 1. My work was pretty public. This has some benefits, and also some real downsides.
(12:31) 2. Many people seem confused about what CEA does, and seemed to assume incorrect things about me because I was a CEA staff member.
(14:58) 3. My job involved working on or maintaining many different projects, which made it difficult for me to focus on any single thing or make progress on proactive projects.
(16:03) 4. Despite taking little of my time, moderation was quite draining for me.
(18:26) Looking back on my work
(23:08) Thank you!
The original text contained 11 footnotes which were omitted from this narration.
First published: October 3rd, 2024
Source: https://forum.effectivealtruism.org/posts/SPZv8ygwSPtkzo7ta/announcing-my-departure-from-cea-and-sharing-assorted-notes
Narrated by TYPE III AUDIO.
Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Express interest in an "FHI of the West", published by habryka on April 18, 2024 on LessWrong. TLDR: I am investigating whether to found a spiritual successor to FHI, housed under Lightcone Infrastructure, providing a rich cultural environment and financial support to researchers and entrepreneurs in the intellectual tradition of the Future of Humanity Institute. Fill out this form or comment below to express interest in being involved either as a researcher, entrepreneurial founder-type, or funder. The Future of Humanity Institute is dead: I knew that this was going to happen in some form or another for a year or two, having heard through the grapevine and private conversations of FHI's university-imposed hiring freeze and fundraising block, and so I have been thinking about how to best fill the hole in the world that FHI left behind. I think FHI was one of the best intellectual institutions in history. Many of the most important concepts[1] in my intellectual vocabulary were developed and popularized under its roof, and many crucial considerations that form the bedrock of my current life plans were discovered and explained there (including the concept of crucial considerations itself). With the death of FHI (as well as MIRI moving away from research towards advocacy), there no longer exists a place for broadly-scoped research on the most crucial considerations for humanity's future. The closest place I can think of that currently houses that kind of work is the Open Philanthropy worldview investigation team, which houses e.g. Joe Carlsmith, but my sense is Open Philanthropy is really not the best vehicle for that kind of work. 
While many of the ideas that FHI was working on have found traction in other places in the world (like right here on LessWrong), I do think that with the death of FHI, there no longer exists any place where researchers who want to think about the future of humanity in an open-ended way can work with other people in a high-bandwidth context, or get operational support for doing so. That seems bad. So I am thinking about fixing it. Anders Sandberg, in his oral history of FHI, wrote the following as his best guess of what made FHI work: What would it take to replicate FHI, and would it be a good idea? Here are some considerations for why it became what it was:
Concrete object-level intellectual activity in core areas and finding and enabling top people were always the focus. Structure, process, plans, and hierarchy were given minimal weight (which sometimes backfired - flexible structure is better than little structure, but as organization size increases more structure is needed).
Tolerance for eccentrics. Creating a protective bubble to shield them from larger University bureaucracy as much as possible (but do not ignore institutional politics!).
Short-term renewable contracts. [...] Maybe about 30% of people given a job at FHI were offered to have their contracts extended after their initial contract ran out. A side-effect was to filter for individuals who truly loved the intellectual work we were doing, as opposed to careerists.
Valued: insights, good ideas, intellectual honesty, focusing on what's important, interest in other disciplines, having interesting perspectives and thoughts to contribute on a range of relevant topics. Deemphasized: the normal academic game, credentials, mainstream acceptance, staying in one's lane, organizational politics.
Very few organizational or planning meetings. Most meetings were only to discuss ideas or present research, often informally.
Some additional things that came up in a conversation I had with Bostrom himself about this: A strong culture that gives people guidance on what things to work on, and helps researchers and entrepreneurs within the organization coordinate A bunch of logistical and operation...
TLDR: I am investigating whether to found a spiritual successor to FHI, housed under Lightcone Infrastructure, providing a rich cultural environment and financial support to researchers and entrepreneurs in the intellectual tradition of the Future of Humanity Institute. Fill out this form or comment below to express interest in being involved either as a researcher, entrepreneurial founder-type, or funder. The Future of Humanity Institute is dead: I knew that this was going to happen in some form or another for a year or two, having heard through the grapevine and private conversations of FHI's university-imposed hiring freeze and fundraising block, and so I have been thinking about how to best fill the hole in the world that FHI left behind. I think FHI was one of the best intellectual institutions in history. Many of the most important concepts[1] in my intellectual vocabulary were developed and popularized under its [...]
The original text contained 1 footnote which was omitted from this narration.
First published: April 18th, 2024
Source: https://www.lesswrong.com/posts/ydheLNeWzgbco2FTb/express-interest-in-an-fhi-of-the-west
Narrated by TYPE III AUDIO.
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. TLDR: I've collected some tips for research that I've given to other people and/or used myself, which have sped things up and helped put people in the right general mindset for empirical AI alignment research. Some of these are opinionated takes, also around what has helped me. Researchers can be successful in different ways, but I still stand by the tips here as a reasonable default. What success generally looks like: Here, I've included specific criteria that strong collaborators of mine tend to meet, with rough weightings on the importance, as a rough north star for people who collaborate with me (especially if you're new to research). These criteria are for the specific kind of research I do (highly experimental LLM alignment research, excluding interpretability); some examples of research areas where this applies are e.g. scalable oversight [...]
First published: February 29th, 2024
Source: https://www.lesswrong.com/posts/dZFpEdKyb9Bf4xYn7/tips-for-empirical-alignment-research
Narrated by TYPE III AUDIO.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Tips for Empirical Alignment Research, published by Ethan Perez on February 29, 2024 on The AI Alignment Forum. TLDR: I've collected some tips for research that I've given to other people and/or used myself, which have sped things up and helped put people in the right general mindset for empirical AI alignment research. Some of these are opinionated takes, also around what has helped me. Researchers can be successful in different ways, but I still stand by the tips here as a reasonable default. What success generally looks like Here, I've included specific criteria that strong collaborators of mine tend to meet, with rough weightings on the importance, as a rough north star for people who collaborate with me (especially if you're new to research). These criteria are for the specific kind of research I do (highly experimental LLM alignment research, excluding interpretability); some examples of research areas where this applies are e.g. scalable oversight, adversarial robustness, chain-of-thought faithfulness, process-based oversight, and model organisms of misalignment. The exact weighting will also vary heavily depending on what role you're serving on the team/project. E.g., I'd probably upweight criteria where you're differentially strong or differentially contributing on the team, since I generally guide people towards working on things that line up with their skills. For more junior collaborators (e.g., first time doing a research project, where I've scoped out the project), this means I generally weigh execution-focused criteria more than direction-setting criteria (since here I'm often the person doing the direction setting). Also, some of the criteria as outlined below are a really high bar, and e.g. 
I only recently started to meet them myself after 5 years of doing research and/or I don't meet other criteria myself. This is mainly written to be a north star for targets to aim for. That said, I think most people can get to a good-to-great spot on these criteria with 6-18 months of trying, and I don't currently think that many of these criteria are particularly talent/brains bottlenecked vs. just doing a lot of deliberate practice and working to get better on these criteria (I was actively bad at some of the criteria below, like implementation speed, even ~6 months into me doing research, but improved a lot since then with practice). With that context, here are the rough success criteria I'd outline:
[70%] Getting ideas to work quickly
[45%] Implementation speed
Able to quickly implement a well-scoped idea. An example of doing really well here is if we talk about an idea one day and decide it's exciting/worth doing, and you tell me the next day whether it worked.
Able to run a high volume of experiments. You're doing really well here if it's hard for your supervisor to keep up with the volume of the experiments/results you're showing; 30m or even 60m weekly 1:1 meetings should feel like not long enough to discuss all of the results you have, and you have to filter what we discuss in our weekly meetings to just the most important and decision-relevant results. If some experiments take a while to run, you're running a lot of other project-relevant experiments in parallel or implementing the next experiment. (Exceptions: the experiments you're running take more than overnight/18h to run and there's no way to design them to be shorter; or the experiments are very implementation-heavy.)
Able to design a minimal experiment to test a mid/high-level idea.
You run experiments in a way such that you're rarely compute or experiment-time bottlenecked (especially early in a project), and your experiments are designed to be easy/quick to implement.
You trade off code quality and implementation speed in the best way for long-run productivity. You bias heavily towards speed in gener...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My Experience Donating Blood Stem Cells, or: Why You Should Join a Bone Marrow Registry, published by Silas Strawn on January 3, 2024 on The Effective Altruism Forum. Note: I'm not a doctor. Please don't make decisions about your health based on an EA forum post before at least talking with a physician or other licensed healthcare practitioner. TLDR: I donated blood stem cells in early 2021. Immediately prior, I had been identified as the best match for someone in need of a bone marrow transplant, likely with leukemia, lymphoma, or a similar condition. Although the first attempt to collect my blood stem cells failed, my experience was overwhelmingly positive as well as fulfilling on a personal level. The foundation running the donation took pains to make it as convenient as possible - and free, other than my time. I recovered quickly and have had no long-term issues related to the donation[1]. I would encourage everyone to at least do the cheek swab to join the registry if they are able. Use this page to join the Be The Match registry. This post was prompted - very belatedly - by a comment from "demost_" on Scott Alexander's post about his experience donating a kidney[2]. The commenter was speculating about the differences between bone marrow donation and kidney donation[3]. I'm typically a lurker, but I figured this is a case where I actually do have something to say[4]. According to demost_, fewer than 1% of those on the bone marrow registry get matched, so my experience is relatively rare. I checked and couldn't find any other forum posts about being a blood stem cell or bone marrow donor. I hope to shine a light on what the experience is like as a donor. I know EAs are supposed to be motivated by cold, hard facts and rationality, and so this post may stick out since it's recounting a personal experience[5].
Nevertheless, given how close-to-home matters of health are, I figured this could be useful for those considering joining the registry or donating. My Donation Experience I joined the registry toward the end of my college years. I don't recall the exact details, but I've pieced together the timeline from my email archives. Be The Match got my cheek swab sample in December 2019 and I officially joined the registry in January 2020. If you're a university student (at least in America[6]), there's a good chance that at some point there will be a table in your commons or quad where volunteers will be offering cheek swabs to join the bone marrow donor registry. The whole process takes a few minutes and I'd encourage everyone to at least join the registry if they can. Mid-December 2020, I was matched and started the donation process. For the sake of privacy, they don't tell you anything about the recipient at that point beyond the vaguest possible demographic info. I think they told me the gender and an age range, but nothing besides. demost_ supposed that would-be donors should be more moved to donate bone marrow than kidneys since there's a particular, identifiable person in need (and marrow is much more difficult to match, so you're less replaceable as a donor). I can personally attest to this. Even though I didn't know much about the recipient at all, I felt an extreme moral obligation to see the process through. I knew that my choice to donate could make a massive difference to this person. I imagined how I would feel if it were a friend or loved one in need or even myself. The minor inconveniences of donating felt doubly minor next to the weight of someone's life. As a college student, I had a fluid schedule. I was also fortunate that my distributed systems professor was happy to let me defer an exam scheduled for the donation date. 
To their credit, Be The Match offered not only to compensate any costs associated with the donation, but also to replace any wages missed...
Manfromleng analyzes Parallel Jim Culver and his new mechanic, the Spirit deck. I had hoped to release this video weeks ago but it turns out I had a lot to say about Parallel Jim and the new mechanic. TLDR: I give the mechanic an Elder Sign for theme and a Curse token for execution. I'm looking forward to hearing what you think about it! Contact manfromleng@gmail.com.
TLDR: I've been fighting for my life. TW: talks of depression and s**c*de. Listen at your own risk. Watch the podcast on YouTube: https://www.youtube.com/@WelcomeToTheKingdom Please rate and review the podcast on the platform you're listening on! (only 5 stars accepted!! heheh) Connect on all social media :) @atakoraaa https://www.youtube.com/c/karrenkora for all business inquiries: connect@theraenaeffect.com --- Support this podcast: https://podcasters.spotify.com/pod/show/wttk/support
These are the final days of our first public Elephant In the Room challenge and it's been a dramatic success, for me personally and for many of the 100+ people who are participating. In this week's episode, I share my story – my elephant, my struggles, and what I learned – and draw an important lesson out of it. TLDR: I went in thinking the Elephant was something blocking me from being who I am and living the way I want to live. That's true. But what I learned is that by facing it, I am flooded with a huge amount of creative energy, like some kind of superhero. And I never expected that. If you liked this episode, let us know with a review. And follow this show to get all our latest episodes. Of course, if you really liked this episode, subscribe to our newsletter to never miss an episode, get informed about live events, and do direct Q&A with me: http://momentumlab.com/podcast
About The Show: This show, Ten Thousand Heroes, is about aligning our life with our purpose. Most of the time that requires: confusion, hard work, addressing conflict, and unconditional love. How would we treat each other if we assumed every person we met was a hero in disguise (just like we are)? How would our society and culture change if we did that? We're here to support you and challenge you during that journey.
Show Links:
Voicemail: https://www.speakpipe.com/10khshow
Email: ankur@momentumlab.com
10,000 Heroes: http://10kh.show
Apple Podcasts: https://podcasts.apple.com/us/podcast/10000-heroes/id1565667158
Spotify: https://open.spotify.com/show/4Mh2WvODShs6jwIQSd0wp1
YouTube: https://www.youtube.com/@10kheroesshow
Elephant In The Room Challenge: http://momentumlab.com
Newsletter: http://momentumlab.com/podcast
About our sponsor: 10,000 Heroes is brought to you by Momentum Lab. I normally refer to Momentum Lab as an experiment-based coaching program or a goal accelerator. But it's beyond that.
It's a deep investigation into Purpose, Vision, and what it takes to achieve our goals in every area of life. If you're interested in falling in love with who you are, what you're doing, or what you're surrounded with, there are two roads: accepting what is, and transforming your situation. We help you do both. The best way of learning more is to sign up for our weekly email, (Momentum) Lab Notes: http://momentumlab.com/podcast
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Social Alignment Problem, published by irving on April 28, 2023 on LessWrong. TLDR: I think public outreach is a very hopeful path to victory. More importantly, extended large-scale conversation around questions such as whether public outreach is a hopeful path to victory would be very likely to decrease p(doom). You're a genius mechanical engineer in prison. You stumble across a huge bomb rigged to blow in a random supply closet. You shout for two guards passing by, but they laugh you off. You decide to try to defuse it yourself. This is arguably a reasonable response, given that this is your exact skill set. This is what you were trained for. But after a few hours of fiddling around with the bomb, you start to realize that it's much more complicated than you thought. You have no idea when it's going to go off, but you start despairing that you can defuse it on your own. You sink to the floor with your face in your hands. You can't figure it out. Nobody will listen to you. Real Talking To The Public has never been tried Much like the general public has done with the subject of longevity, I think many people in our circle have adopted an assumption of hopelessness toward public outreach social alignment, before a relevant amount of effort has been expended. In truth, there are many reasons to expect this strategy to be quite realistic, and very positively impactful too. A world in which the cause of AI safety is as trendy as the cause of climate change, and in which society is as knowledgeable about questions of alignment as it is about vaccine efficacy (meaning not even that knowledgeable), is one where sane legislation designed to slow capabilities and invest in alignment becomes probable, and where capabilities research is stigmatized and labs find access to talent and resources harder to come by. 
I've finally started to see individual actors taking steps towards this goal, but I've seen a shockingly small amount of coordinated discussion about it. When the topic is raised, there are four common objections: They Won't Listen, Don't Cry Wolf, Don't Annoy the Labs, and Don't Create More Disaster Monkeys. They won't listen/They won't understand I cannot overstate how clearly utterly false this is at this point. It's understandable that this has been our default belief. I think debating e/accs on Twitter has broken our brains. The experience of explaining again and again why something smarter than you that doesn't care about you is dangerous, and being met with these arguments, is a soul-crushing experience. It made sense to expect that if it's this hard to explain to a fellow computer enthusiast, then there's no hope of reaching the average person. For a long time I avoided talking about it with my non-tech friends (let's call them "civilians") for that reason. However, when I finally did, it felt like the breath of life. My hopelessness broke, because they instantly vigorously agreed, even finishing some of my arguments for me. Every single AI safety enthusiast I've spoken with who has engaged with civilians has had the exact same experience. I think it would be very healthy for anyone who is still pessimistic about convincing people to just try talking to one non-tech person in their life about this. It's an instant shot of hope. The truth is, if we were to decide that getting the public on our side is our goal, I think we would have one of the easiest jobs any activists social alignment researchers have ever had. Far from being closed to the idea, civilians in general literally already get it. It turns out, Terminator and the Matrix have been in their minds this whole time. We assumed they'd been inoculated against serious AI risk concern - turns out, they walked out of the theaters thinking “wow, that'll probably happen someday”. 
They've been thinking that th...
Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Social Alignment Problem, published by irving on April 28, 2023 on LessWrong. TLDR: I think public outreach is a very hopeful path to victory. More importantly, extended large-scale conversation around questions such as whether public outreach is a hopeful path to victory would be very likely to decrease p(doom). You're a genius mechanical engineer in prison. You stumble across a huge bomb rigged to blow in a random supply closet. You shout for two guards passing by, but they laugh you off. You decide to try to defuse it yourself. This is arguably a reasonable response, given that this is your exact skill set. This is what you were trained for. But after a few hours of fiddling around with the bomb, you start to realize that it's much more complicated than you thought. You have no idea when it's going to go off, but you start despairing that you can defuse it on your own. You sink to the floor with your face in your hands. You can't figure it out. Nobody will listen to you. Real Talking To The Public has never been tried Much like the general public has done with the subject of longevity, I think many people in our circle have adopted an assumption of hopelessness toward public outreach social alignment, before a relevant amount of effort has been expended. In truth, there are many reasons to expect this strategy to be quite realistic, and very positively impactful too. 
A world in which the cause of AI safety is as trendy as the cause of climate change, and in which society is as knowledgeable about questions of alignment as it is about vaccine efficacy (meaning not even that knowledgeable), is one where sane legislation designed to slow capabilities and invest in alignment becomes probable, and where capabilities research is stigmatized and labs find access to talent and resources harder to come by. I've finally started to see individual actors taking steps towards this goal, but I've seen a shockingly small amount of coordinated discussion about it. When the topic is raised, there are four common objections: They Won't Listen, Don't Cry Wolf, Don't Annoy the Labs, and Don't Create More Disaster Monkeys.

They won't listen/They won't understand

I cannot overstate how utterly false this is at this point. It's understandable that this has been our default belief. I think debating e/accs on Twitter has broken our brains. The experience of explaining again and again why something smarter than you that doesn't care about you is dangerous, and being met with these arguments, is soul-crushing. It made sense to expect that if it's this hard to explain to a fellow computer enthusiast, then there's no hope of reaching the average person. For a long time I avoided talking about it with my non-tech friends (let's call them "civilians") for that reason. However, when I finally did, it felt like the breath of life. My hopelessness broke, because they instantly and vigorously agreed, even finishing some of my arguments for me. Every single AI safety enthusiast I've spoken with who has engaged with civilians has had the exact same experience. I think it would be very healthy for anyone who is still pessimistic about convincing people to just try talking to one non-tech person in their life about this. It's an instant shot of hope.
The truth is, if we were to decide that getting the public on our side is our goal, I think we would have one of the easiest jobs any activists have ever had. Far from being closed to the idea, civilians in general literally already get it. It turns out, Terminator and The Matrix have been in their minds this whole time. We assumed they'd been inoculated against serious AI risk concern - turns out, they walked out of the theaters thinking "wow, that'll probably happen someday". They've been thinking that th...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Tensions between different approaches to doing good, published by James Özden on March 19, 2023 on The Effective Altruism Forum. Link-posted from my blog here. TLDR: I get the impression that EAs don't always understand where certain critics are coming from e.g. what do people actually mean when they say EAs aren't pursuing "system change" enough? or that we're focusing on the wrong things? I feel like I hear these critiques a lot, so I attempted to steelman them and put them into more EA-friendly jargon. It's almost certainly not a perfect representation of these views, nor exhaustive, but might be interesting anyway. Enjoy! I feel lucky that I have fairly diverse groups of friends. On one hand, some of my closest friends are people I know through grassroots climate and animal rights activism, from my days in Extinction Rebellion and Animal Rebellion. On the other hand, I also spend a lot of time with people who have a very different approach to improving the world, such as friends I met through the Charity Entrepreneurship Incubation Program or via effective altruism. Both of these somewhat vague and undefined groups, “radical” grassroots activists and empirics-focused charity folks, often critique the other group with various concerns about their methods of doing good. Almost always, I end up defending the group under attack, saying they have some reasonable points and we would do better if we could integrate the best parts of both worldviews. To highlight how these conversations usually go (and clarify my own thinking), I thought I would write up the common points into a dialogue between two versions of myself. One version, labelled Quantify Everything James (or QEJ), discusses the importance of supporting highly evidence-based and quantitatively-backed ways of doing good. 
This is broadly similar to what most effective altruists advocate for. The other part of myself, presented under the label Complexity-inclined James (CIJ), discusses the limitations of this empirical approach, and how else we should consider doing the most good. With this character, I'm trying to capture the objections that my activist friends often have. As it might be apparent, I'm sympathetic to both of these different approaches and I think they both provide some valuable insights. In this piece, I focus more on describing the common critiques of effective altruist-esque ways of doing good, as this seems to be something that isn't particularly well understood (in my opinion). Without further ado: Quantify Everything James (QEJ): We should do the most good by finding charities that are very cost-effective, with a strong evidence base, and support them financially! For example, organisations like The Humane League, Clean Air Task Force and Against Malaria Foundation all seem like they provide demonstrably significant benefits on reducing animal suffering, mitigating climate change and saving human lives. For example, external evaluators estimate the Against Malaria Foundation can save a human life for around $5000 and that organisations like The Humane League affect 41 years of chicken life per dollar spent on corporate welfare campaigns. It's crucial we support highly evidence-based organisations such as these, as most well-intentioned charities probably don't do that much good for their beneficiaries. Additionally, the best charities are likely to be 10-100x more effective than even the average charity! Using an example from this very relevant paper by Toby Ord: If you care about helping people with blindness, one option is to pay $40,000 for someone in the United States to have access to a guide dog (the costs of training the dog & the person). 
However, you could also pay for surgeries to treat trachoma, a bacterial infection that is the top cause of blindness worldwide. At around $20 per ...
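The force of Ord's example is the ratio between the two interventions. A minimal sketch of the back-of-the-envelope arithmetic, assuming the approximate figures quoted above ($40,000 per guide dog, roughly $20 per trachoma surgery):

```typescript
// Rough cost-effectiveness comparison using the approximate figures
// from the Toby Ord example quoted above (illustrative, not precise estimates).
const guideDogCostUSD = 40_000;    // cost to provide one guide dog in the US
const trachomaSurgeryCostUSD = 20; // approximate cost of one trachoma surgery

// How many surgeries one guide-dog budget would fund
const ratio = guideDogCostUSD / trachomaSurgeryCostUSD;
console.log(`One guide-dog budget funds roughly ${ratio} trachoma surgeries`);
```

Even if the surgery figure is off by an order of magnitude, the gap between interventions remains in the hundreds, which is the "10-100x more effective than the average charity" point in concrete terms.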
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Masterdocs of EA community building guides and resources, published by Irene H on March 7, 2023 on The Effective Altruism Forum. TLDR: I made a comprehensive overview of EA curricula, event organization guides, and syllabi, as well as an overview of resources on EA community building, communications, strategy, and more. The EA community builders I shared them with up to now found them really helpful.

Context

Together with Jelle Donders, I co-founded the university group at Eindhoven University of Technology in the Netherlands last summer. We followed the UGAP mentorship program last semester and have been thinking a lot about events and programs to organize for our EA group and about general EA community-building strategies. There is a big maze of Google docs containing resources on this, but none of them gives a complete and updated overview. I wanted to share two resources for EA community builders I've been working on over the past months. Both I made initially as references for myself, but when I shared them with other community builders, they found them quite helpful. Therefore, I'd now like to share them more widely, so that others can hopefully have the same benefits.

EA Eindhoven Syllabi Collection

There are many lists of EA curricula, event organization guides, and syllabi, but none of them are complete. Therefore, I made a document to which I save everything of that nature I come across, with the aim of getting a somewhat better overview of everything out there. I also went through other lists of this nature and saved all relevant documents to this collection, so it should be a one-stop shop. It is currently 27 pages long and I don't know of another list that is more exhaustive (also compared to the EA Groups Resource Centre, which only offers a few curated resources per topic).
I update this document regularly when I come across new resources. When we want to organize something new at my group, we have a look at this document to see whether someone else has already done the thing we want to do, so we can save time, or just to get some inspiration. You can find the document here.

Community Building Readings

I also made a document that contains a lot of resources on EA community building, communications, strategy, and more, related to the EA movement as a whole and to EA groups specifically. These are not specific guides for organizing concrete events, programs, or campaigns, but are aimed at getting a better understanding of more general thinking, strategy and criticism of the EA community. You can find the document here.

Disclaimers for both documents

I do not necessarily endorse/recommend the resources and advice in these documents. My sole aim with these documents is to provide an overview of the space of thinking and resources around EA community building, not to advocate for one particular way of going about it. These documents are probably really overwhelming, but my aim was to gather a comprehensive overview of all resources, as opposed to linking only 1 or 2 recommendations, which is the way the Groups Resources Centre or the GCP EA Student Groups Handbook are organized. The way I sorted things into categories will always remain somewhat artificial, as some boundaries are blurry and some things fit into multiple categories.

How to use these documents

Using the table of contents or Ctrl + F + [what you're looking for] probably works best for navigation. Please feel free to place comments and make suggestions if you have additions! When you add something new, please add a source (name of the group and/or person who made the resource) wherever possible, to give people the credit they're due and to facilitate others reaching out to the creator if they have more questions.
In case of questions, feedback or comments, please reach out to info@eaeindhoven.nl. I hope ...
TLDR: I don't believe it was sickness. Become a monthly partner: https://www.modernday.org/field-workers/Shane.Winnings/ Book me for your next service, conference, event or camp at shanewinnings.com/booking --- Support this podcast: https://anchor.fm/shane-winnings/support
On this episode of the podcast, we're doing something a little different. Considering the time of year, I wanted to take a moment to reflect on a few thoughts I have about the podcast and express my gratitude to all you listeners out there for your attention and your support of the show. Also, I reshare the very first interview that I did for this podcast, starring my good friend Dr. Ally DeGraff (GSS 002). TLDR: I monologue for a few minutes and rebroadcast an old episode. ***************************** Don't forget to leave a rating and review on Apple Podcasts and Spotify.
TLDR: I am organizing a vacation for fatties. Tell me where you wanna go by completing this survey: https://my.trovatrip.com/public/l/survey/fierce.fatty Today I am telling you all about the dream fatty vacation and what you need to do to tell me about your preferences. Do you want to go to Italy, Ireland, or Iceland? Costa Rica, Cyprus, Cambodia? Fierce Fatty vacation goals: ✈️ A location where the airline has a customer-of-size policy
I'm horrible with internet slang, acronyms, phrases… whatever. The first time someone posted TLDR I had to google it. I thought of this the other day when my managers and I were talking about abbreviations and acronyms and lingo that it may be helpful to make a list of. I made the joke about how I'm so bad with knowing what those things are… And I mentioned to Michelle, my lead Dietitian, the first time I saw TLDR. She Googled it the second I mentioned it, as she didn't know either. Anyway, why am I rambling on about this TLDR… and for those of you about to Google it, it is Too Long, Didn't Read…? Because the first time I saw it, it was commented on one of my blog posts. And seeing it oddly made me sadder and madder than any troll comment I could have gotten at the time. I wasn't mad because my articles are freaking fabulous and they were missing out, but because they were holding themselves back. I was honestly sad for them. Because in their search for a quick answer, a fast fix, simply WHAT TO DO, they were missing out on the part that would actually allow them to find something that works for them… Something that led to lasting results… The WHY behind the systems so they could implement them correctly. They weren't open to learning. And their desire to just be told what to do quickly was probably what kept them working really hard without seeing results… wasting far more time putting in pointless effort than it would have taken them to just embrace the learning and read the post. I bring this up because I know I've even been guilty of this… Skipping around an article, course or program to pull what I think I need.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Crosspost]: Huge volcanic eruptions: time to prepare (Nature), published by Mike Cassidy on August 19, 2022 on The Effective Altruism Forum. Lara Mani and I have a comment article published in Nature this week about large magnitude volcanic eruptions. TLDR: I also wrote a twitter thread here: This is a more condensed focus piece, but contains elements we've covered in these posts too. This is really the start of the work we've been doing in this area; we're hoping to quantify how globally catastrophic large eruptions would be for our global food, water and critical systems. From there, we'll have a better idea of the most effective mitigation strategies. But because this is such a neglected area (screenshot below), we know that even modest investment and effort will go a long way. We highlight several ways that we think could help save a lot of lives, both in the near term (smaller, more frequent eruptions) and in the future (large magnitude and super-eruptions): a) pinpointing where the biggest risk areas/volcanoes are, b) increasing and improving monitoring, c) increasing preparedness (e.g. nowcasting - see below), and d) researching volcano geoengineering (the ethics of which we're working with Anders Sandberg on). The last point may interest some others in the x-risk community, as potential solutions like these (screenshot below) could potentially help mitigate the effects of nuclear and asteroid winters too. We're having conversations with atmospheric scientists about this type of research. Another way tech-savvy EAs might be able to help is with the creation of 'nowcasting' technology, which again would be useful for a range of Global Catastrophic Risks. The paper has been covered a fair bit in the international media (e.g.)
and we feel like we could use this momentum to make some tractable improvements to global volcanic risk. If you'd like to help fund our work or discuss any of these ideas with us, then get in touch! Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Custom iPhone Widget to Encourage EA Forum Use, published by Will Payne on June 27, 2022 on The Effective Altruism Forum. TLDR: I made a little custom iPhone widget for the EA Forum recently and have been getting a lot of value out of it. At the moment I don't feel bothered to make this into an app which is easy to install, but I think people could set it up themselves in ~5 mins or less, and if you don't use the forum much but want to use it more, it might help.

What is it?

I set up an EA Forum widget using a little iPhone personalisation tool called Scriptable. It shows the details of a random front page forum post once per hour. And it looks like this... (At the time of writing I haven't read Ozymandias' post.)

Why I think it's good

I've noticed that when I interact with the forum I usually enjoy it. However, I usually don't reach for the forum over other easily available sources of entertainment like YouTube, Instagram, or TikTok (although don't worry, I deleted TikTok). I made this widget initially to try and make the Forum an easy-reach alternative to these apps. I think it worked; my usage of the forum went from once per month (or less) to once or twice a day. At one point I removed the widget and didn't get around to re-adding it for a few months, and my forum usage dropped down again. Since reinstalling it a few days ago, it has jumped back up. I'd put this at somewhere around a 10-100x increase in value out of the forum for me, but I don't know yet if it will work long term or just hold my attention for 1-2 week chunks.

How to set it up

This works for iPhone and presumably iPad (although I haven't been able to test it on iPad).
1. Download the Scriptable app, then open it
2. Click the little plus icon in the top right corner
3. Paste this code into the editor
4. Go to your home screen
5. Long press on the home screen or any app to enter jiggle mode
6. Click the plus in the top left to add a widget
7. Find Scriptable (you can use the search bar)
8. Pick the midsized view and click "Add widget" at the bottom
9. Tap on the widget
10. Click choose script
11. Pick the script you just made (if you didn't actively change the name, it should be the only option, called "Untitled Script")
12. Tap anywhere else on the screen to exit jiggle mode. You're done!

I just ran through this process and it took a little over 1 minute. Here's a video. I hope this helps you as much as it's helped me. Since the EA Forum, LessWrong and the Alignment Forum all use the same code, you can replace getForumPosts(EAForum) with getForumPosts(LessWrong) or getForumPosts(AlignmentForum). For what it's worth, I don't advocate copying random code you see on the Internet and running it, but we're all friends here. (Plus I trust the community to have someone who feels like reading through the above and checking it's not horrifically unsafe and confirming in the comments.) Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
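The once-per-hour rotation the widget describes can be sketched in a few lines. This is not the actual Scriptable script (which is a JavaScript file linked from the original post); it's a hypothetical TypeScript illustration of the core idea, with `pickHourlyPost` and the sample titles invented for the example. Scriptable's widget-rendering APIs are omitted.

```typescript
// Pick a post deterministically per clock hour, so the displayed
// post changes once per hour (hypothetical sketch, not the real widget code).
function pickHourlyPost(postTitles: string[], now: Date): string {
  // Whole hours elapsed since the Unix epoch
  const hourIndex = Math.floor(now.getTime() / 3_600_000);
  // Same hour -> same post; next hour -> next post in the list
  return postTitles[hourIndex % postTitles.length];
}

// Example with placeholder titles (the real widget fetches frontpage posts)
const titles = ["Post A", "Post B", "Post C"];
console.log(pickHourlyPost(titles, new Date()));
```

In the real widget the title list would come from the forum, and the chosen post's details would be drawn into the iOS widget; the point here is just that hour-keyed selection gives the "new post every hour" behaviour without any stored state.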
HAPPY TUESSSSSDAY

Today we're covering some news stories about Upstart, Shopify, Okta, and The Trade Desk, then diving into my notes from GitLab's (GTLB) earnings call last week. TLDR: I was very impressed, opened a position, and intend to DCA in over time, keeping it a "large" position in my portfolio.

BUT FIRST

This newsletter would not be possible without Commonstock. In my mind, Commonstock is like Twitter without the bots and people trolling you because they think they are the ruler of all opinions on stocks/investing. This isn't an advertisement, but I love the platform so much that I badgered the team over the last couple of years until they basically had to give me a job so I'd stop bothering them. In all seriousness, I joined because the team is focused on improving the world's financial health by creating a platform that amplifies transparency and accountability alongside people's opinions on everything from crypto to stocks. My portfolio is linked securely on Commonstock and you can always see my positions, trades, and performance at this link.

Portfolio News

Upstart (UPST) is down 7% after Wedbush downgraded it to underperform with a $75 price target (down from $110). The analyst noted weakening delinquency trends. This doesn't concern me at all and I intend to keep dollar-cost averaging (DCA) into UPST, so I'll take lower prices due to short-term fears all day!

Shopify (SHOP) was down 12% yesterday after an announcement that Google intends to offer "Last Mile Fleet Solution" to help business owners optimize deliveries/logistics. At this point, I'm not concerned, and SHOP will move towards the top of my list for new contributions if prices drop more from here. If we see Shopify's results suffer in the next few quarters, then it could indicate they're losing part of their competitive advantage.
I'll research this new offering from Google and share it in a future email.

Okta (OKTA) is currently down 8% pre-market after hacking group Lapsus$ posted screenshots on its Telegram claiming to have hacked Okta. Okta has stated it sees no signs of a breach. As of right now, I'm not concerned, and I certainly trust Okta's track record + management to quickly fix anything (and make the platform better) if there was a breach. Okta is currently at the top of my list to buy when my next contributions hit. I talked about that in yesterday's email.

Adobe (ADBE) reports earnings this afternoon. I own a starter position in Adobe because it is a fantastic company with a very long track record of success. I don't intend to add to my position because its multiple is a bit high considering forward growth estimates, and I believe there are better opportunities elsewhere (see link to yesterday's email). I'd start to get interested in adding shares if the P/E gets down to around 29-30. It's currently at a blended P/E of 35.3. That'd be about a 20% drop from here, down to around $390/sh. I have no idea what happens with earnings.

The Trade Desk (TTD) launches a new certified service partner program for SMBs. "This program expands self-service access to The Trade Desk's demand-side platform, as client demand for data-driven advertising continues to rise. As part of this announcement, Goodway Group becomes The Trade Desk's first certified service partner to help meet this rising demand." I own TTD and won't be selling my shares anytime soon, but I'd love for the multiple to come down a bit before adding.
It's currently a large position for me and I have some catching up to do in building other positions, so I won't be buying more in the next month or two, but I'm very bullish on the company's future. Here are my notes from the earnings conference call.

CEO Sid Sijbrandij opening remarks:
- Beat revenue expectations with revenue of $77.8 million, up 69% year-over-year
- Dollar-based net retention rate exceeded the 152% reported in the S-1 filing
- Strength was broad-based across enterprise, mid-market, and SMB customers
- Ultimate remains the fastest-growing tier of the product offering
- Strength indicates the market is moving from DIY DevOps to a DevOps platform, which plays to GitLab's strength
- The Gartner Market Guide for value stream delivery platforms states that by 2024, 60% of organizations will have switched to a platform approach, up from 20% in 2021
- Management believes "the source of our product differentiation is our platform approach to DevOps."
- "We believe our single application helps companies to deliver software faster, improve organizational efficiency and reduce security and compliance risk. The DevOps platform also enables our customers to manage and secure the entire DevOps workflow across any hybrid and multi-cloud environment."
- "Acquisition of Opstrace, a pre-revenue open-source observability solution, will allow GitLab to offer robust monitoring and observability capabilities that will enable organizations to lower incident rates, increase developer productivity and reduce mean time to resolution."
- Management believes the DevOps platform addresses a $40B market opportunity. A Bain & Company study showed 90% of companies believe DevOps is a priority but only 12% believe they have mature DevOps practices
- Added over 500 net new base customers. New logos/expansions include the U.S. Army, Deutsche Telekom and Travis Perkins.
- Travis Perkins expansion: "Travis Perkins is the U.K.'s largest distributor of building materials.
They accelerated their migration to the cloud using GitLab Premium by consolidating their software to it. Doing so increased their velocity, cut down overall cost by 20%, and allowed their team members to focus on building new customer-facing digital services and capabilities instead of managing their toolchain. This quarter, they upgraded from Premium to Ultimate licenses and more than doubled the number of users on the system. This will expand usage of GitLab to their security teams and allow development, operations, and security to collaborate on a single DevOps platform."

CFO Brian Robbins remarks:
- Revenue is the key metric to evaluate the health and performance of the business. Because approximately 90% of revenue is ratable, it serves as a predictable and transparent benchmark for how we are growing.
- "Cohorts from six years ago are still expanding today. This is a testament to how we're constantly adding value to our customers. Most of our customers start using GitLab with small teams with just one to two stages of our platform. From there, they typically increase their spend with us 2x over the first year as our platform is adopted across multiple teams. Customers then continue to increase their spend as our platform expands to more teams across their organizations or they upgrade to a higher paid tier."
- "We are also effective at retaining our customers. When our customers deploy the DevOps platform, it becomes a central platform from which all their DevOps workflows originate, making it sticky and difficult to replace. The result is that we ended our fourth quarter with a dollar-based net retention rate exceeding 152%, which is higher than the disclosure we provided in our S-1 at the time of our IPO."
- "Our platform is offered with a free version and two paid subscription tiers which we call Premium and Ultimate. Our paid tiers are priced per user with different features per tier.
Every user within an organization is on the same plan, which helps keep our business model transparent and easy to understand. The Ultimate tier is our fastest-growing tier, now representing 37% of our annual recurring revenue for the fourth quarter, compared with 26% of annual recurring revenue in the fourth quarter of FY 2021, and growing in excess of 100%. In FY '22, our non-GAAP gross margin held steady at 89%. Over time, we expect this to compress as our SaaS offering becomes a larger portion of our business and associated hosting costs will increase."
- Over 4,500 customers with ARR of at least $5,000 per customer, compared to over 4,000 customers in the prior quarter and over 2,700 customers in the prior year. This represents a year-over-year growth rate of approximately 67%. Currently, customers with greater than $5,000 in ARR represent approximately 95% of our ARR.
- Over 490 customers with ARR of at least $100,000, up from 420 customers and over 280 customers compared to the prior quarter and year, respectively. This represents a year-over-year growth rate of approximately 74%.
- 39 customers with ARR of at least $1 million, compared to 20 customers at the end of the prior fiscal year, which represents a year-over-year growth rate of 95%.
- Total RPO grew 95% year-over-year to $312 million.
- Non-GAAP gross margins were 89% for the quarter, which compares to 90% in the immediately preceding quarter and 89% in the fourth quarter last year. As we move forward, we're estimating a moderate reduction in this metric due to the rapid year-over-year growth rate of our SaaS offering.
- Non-GAAP operating loss was $27.4 million, or negative 35% of revenue, compared to a loss of $22.2 million, or negative 48% of revenue, in Q4 of the last fiscal year.
Q4 includes $5 million of expenses related to our JV and majority-owned subsidiary.

Guidance:
- For the first quarter of FY 2023, we expect total revenue of $77 million to $78 million, representing a growth rate of 54% to 56% year-over-year.
- We expect a non-GAAP operating loss of $38.5 million to $37.5 million.
- We expect a non-GAAP net loss per share of $0.28 to $0.27, assuming 147 million weighted average shares outstanding.
- For the full year FY 2023, we expect total revenue of $385.5 million to $390.5 million, representing a growth rate of 53% to 55% year-over-year.
- We expect a non-GAAP operating loss of $142 million to $138 million.
- We expect a non-GAAP net loss per share of $1.02 to $0.97, assuming 148 million weighted average shares outstanding.
- Our annual FY 2023 guidance implies non-GAAP operating margin improvement of almost 300 basis points year-over-year at the midpoint of our guidance ranges. Over the longer term, we believe that a continued targeted focus on growth initiatives and scaling the business will yield further improvement in unit economics.

Q&A

Question about where the strength in the 152% net-dollar retention rate is coming from.
Answer: Ultimate is driving a lot of upsell because of the advanced security capabilities. Create and verify are used heavily, so they are important too. Apart from create, verify and secure, we see growth in packaging and release. Packaging, for example, is replacing JFrog Artifactory at more and more customers.

Question on the acquisition of Opstrace, the pre-revenue open-source application in the category of observability.
Answer: Observability lets you close the loop. You plan to make something, you make it, you roll it out and then you see how it does. So it's a really important element of a DevOps platform, and we're really early. But because it's so important, we wanted to accelerate how fast we got there, and we love the team and the product that Opstrace already built.
So we acquired them to rebuild that inside GitLab, and we think that closing that loop will help our customers achieve better business outcomes. If you get feedback faster, you reduce your cycle time and you get better outcomes. And it's really important to note that we'll do it in an iterative fashion. So in the beginning, our solution will not so much compete with existing vendors as with nonconsumption. People haven't set up monitoring yet, and we'll start from a simple product and work with our users and customers contributing to expand the functionality over time.

Links to company news articles:
- Upstart downgraded by Wedbush
- Shopify - Google "Last Mile Fleet Solution"
- Okta potential security breach
- Adobe Investor Relations Page
- The Trade Desk Launches New Certified Service Partner Program for Small and Medium-Sized Businesses

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit austin.substack.com/subscribe
Let's talk about the elephant in the room. You know you need a signature service. You're working on your mindset. And yet you've been burned. You've invested in courses or programs where you didn't learn much (or what you did learn was… suspect). You felt your money was wasted. And I first want to be really clear here that I know some of you have 100% wasted money and it's been a sucky experience. What I want to offer you, however, is a way to not look at it as a mistake, but rather, a reframe. For any of these "mistakes" you may be regretting, can you ask yourself: you didn't get the success or outcome you wanted; what lesson did you learn? I'm gonna share a personal story about a time I invested $7,500 in a coaching program. And I want to be clear this wasn't a mistake, I just didn't get the immediate ROI that most people might expect from a program like this. TLDR: I invested $7,500 in a 1:1 coaching program at the end of 2020, and a prior $3k investment in 2018 in the same area (launching an online business). I did not get the launch results we expected - I got: full launch assets, all my emails written for me, knowledge in doing FB ads, and clarity on my offer and program + who exactly I'm helping with my programs and how. In addition, I became so much more confident in my social media messaging, in showing up online, and in sharing and becoming committed to my unique transformation. Since then, I've tweaked, updated my messaging, and made a lot of moves. In that time, between 1:1 clients and group clients, I've made $22,475. I got my financial ROI - just 1 year later than I expected. [listen to the podcast for the full version!] My question to you: how are you weighing your ROIs (return on investment)? If you are ONLY relying on the financial ROI, then you may be setting yourself up for disappointment, because that is not something you, or the program you are in, can control. How are you weighing your ROIs?
Just like you invest money in healthy food, beautiful pieces of clothing, or therapy to get different ROIs, your investment in programs may give different ROIs. And sometimes, things take time. You don't expect a six pack after one day of doing abs for 5 minutes. (I mean, it'd be nice). You don't expect your health to improve after eating one apple. You don't expect your relationship to be happily ever after over 2 sessions. Sometimes things take time. Sometimes you've wasted your money. And sometimes, you need to reframe how you're thinking about it. Your options if you feel like you've made a money mistake: Reframe it: what did you learn? Choose to do research on something before you buy - testimonials, do a call, ask people who have been in it. Understand no one's program is going to be your magic bullet (esp if you aren't committed to doing the work - I know no one that is more committed than me lol). Long game thinking! Rinse + repeat. Don't give up.
TLDR: I penned a diss track to Blazing Squad after they remixed a Bone Thugs song but realised I sounded like a hater and I'd get ignored so I didn't invest time or money into it. --- Send in a voice message: https://anchor.fm/0800yofamwhateverinnit/message Support this podcast: https://anchor.fm/0800yofamwhateverinnit/support
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Community Builder Writing Contest: $20,000 in prizes for reflections, published by Akash on March 12, 2022 on The Effective Altruism Forum. TLDR: I am running a writing contest for community builders. The judges and I are looking for reflections that achieve one of the following goals: Help you improve your community-building efforts (e.g., by gaining clarity on your theory of change, planning your career, or reflecting on an event/project you recently completed). Help others improve their community-building efforts (e.g., by sharing a strategy, lesson, or story that others would benefit from). Note that we are especially excited about submissions that focus on longtermist community-building. However, reflections that focus on general community-building efforts and neartermist community-building efforts will also be considered. EDIT: Submissions received by April 30, 2022 will be reviewed (note that submissions received by March 31, 2022 will be eligible for early-bird prizes). Each person (or team) can submit up to three entries. Note that the deadline has been extended in response to feedback. What kinds of submissions are you looking for? Broadly, we are looking for submissions that either help you or help others improve as a community-builder. Reflections that help you: We understand that the prompt “write reflections that help you become more impactful” is rather broad. 
Here are some examples of things we'd be excited to see:
Write down your theory of change
Reflect on a key uncertainty
Find ways to achieve your goals more effectively
Imagine a future version of yourself two years from now that is 10X more impactful than your present self
Reflect on career aptitudes you're hoping to test/develop (see also this exercise)
Engage in structured career planning
Take a project idea and consider if there are ways to pursue it 10X more ambitiously or 10X faster
Apply goal factoring (or other techniques) to help you make a decision
Identify bugs that currently reduce your productivity
Reflections that help others: Experienced community builders often have access to models and strategies that could help others, but these insights often don't get shared widely. Examples in this category include posts like Lessons Learned from Running Stanford EA and SERI, Get in the Van, and Simplify EA Pitches to “Holy Shit, X-Risk”. (Note that these are just examples – you can absolutely submit entries that use a different style. As a general rule, do whatever might help you or others become more impactful). What are the prizes? We will distribute up to $20,000 in prizes.
1st place - $3,000
2nd place - $2,000
3rd place - $1,000
4th and 5th place - $750 each
Other high-quality submissions - $500 each
Top 3 submissions received by March 31 - $1,000 each
If we receive more than 30 high-quality submissions, we may expand the prize pool to reward additional semifinalists. EDIT: We will be awarding three early-bird prizes ($1,000 each) to the three best submissions received by March 31. Who is eligible to participate? Community-builders (individuals involved in building the effective altruism community) will be best-suited for this contest, but anyone is eligible to participate. How will submissions be judged? The judging panel will consist of me (Akash Wasil) and a panel of experienced movement builders. 
Broadly, we are looking for submissions that demonstrate high-quality reasoning, truth-seeking, and an impact-oriented mindset. Here are some tips: Demonstrate strong reasoning transparency. Acknowledge why you believe what you believe (note that in some cases this might be “because other people who I trust say so” or “because I have observed this at several student group meetings.”) Indicate how confident you are in your claims and which of your claims are most important. Write about topics that ar...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: You Can Get Fluvoxamine, published by AppliedDivinityStudies on January 18, 2022 on LessWrong. [TLDR: I paid $95 for a 10 minute video consultation with a doctor, told them I was depressed and wanted fluvoxamine, and got my prescription immediately.] I'm not a doctor, and this isn't medical advice. If you want information on the status of fluvoxamine as a Covid treatment, you can see the evidence base in the appendix, but interpreting those results isn't my business. I'm just here to tell you that if you want fluvoxamine, you can get it. Years ago, some of my friends were into downloading apps that would get you a 10 minute consultation with a doctor in order to quickly acquire a prescription for medical marijuana. Today, similar apps exist for a wide range of medications, and with a bit of Googling, you can find one that will prescribe you fluvoxamine. What's required on your end? In my case, $95, 10 minutes of my time, and some white lies about my mental health. Fluvoxamine is only prescribed right now for depression and anxiety, so if you want it, my advice is to say that: (1) you have an ongoing history of moderate depression and anxiety, and (2) you have taken fluvoxamine in the past, and it's helped. And that's basically it. Because there are many other treatments for depression, you do specifically have to ask for fluvoxamine by name. If they try to give you something else, say that you've tried it before and didn't like the side effects (weight gain, insomnia, headaches, whatever). One more note, and this is critical: unless you are actually suicidal, do not tell your doctor that you have plans to commit suicide, to hurt yourself or others, or do anything that sounds like an immediate threat. This puts you at risk of being put involuntarily in an inpatient program, and you don't want that. 
Finally, you might ask: isn't this super unethical? Aren't you not supposed to lie to doctors to get drugs? Maybe, I don't know, this isn't medical advice, and it's not really ethical advice either. I think the only real potential harms here are that we consume so much fluvoxamine that there isn't enough for depressed people, or that doctors start taking actual depressed patients who want fluvoxamine less seriously. As far as I can tell, there isn't currently a shortage; as to the latter concern, I couldn't really say. Appendix Again, this isn't medical advice. You shouldn't take any of these results or pieces of news coverage as evidence that fluvoxamine works and that the benefits outweigh the costs. I'm literally only adding this to cover my own ass and make the point that fluvoxamine is a normal mainstream thing and not some weird conspiracy drug. Here's the Lancet article, and the JAMA article. Here's Kelsey Piper at Vox: One medication the TOGETHER trial found strong results for, fluvoxamine, is generally used as an antidepressant and to treat obsessive-compulsive disorder. But it appears to reduce the risk of needing hospitalization or medical observation for Covid-19 by about 30 percent, and by considerably more among those patients who stick with the 10-day course of medication. Unlike monoclonal antibodies, fluvoxamine can be taken as a pill at home --- which has been an important priority for scientists researching treatments, because it means that patients can take their medication without needing to leave the home and without straining a hospital system that is expected to be overwhelmed. "We would not expect it to be affected by which variants" a person is sick with, Angela Reiersen, a psychiatrist at Washington University in St. Louis whose research turned up fluvoxamine as a promising anti-Covid candidate, told me. And here's a Wall Street Journal article headlined "Is Fluvoxamine the Covid Drug We've Been Waiting For?" 
with subheading "A 10-day treatment costs only $4 and app...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pedant, a type checker for Cost Effectiveness Analysis, published by Hazelfire on January 2, 2022 on The Effective Altruism Forum. Effort: About 80+ hours of dev work, and a little bit of writing. This project was done under a grant from EA Funds. I would like to give thanks to Ozzie Gooen, Nuño Sempere, Chris Dirkis, Quinn Dougherty and Evelyn Fitzgerald for their comments and support on this work. Intended Audience: This work is interesting for: People who evaluate interventions, or want to get into evaluating interventions, as well as people looking for ambitious software engineering projects in the EA space. People looking to evaluate whether projects similar to Pedant are worth funding. Conflict of Interest: This report is intended as a fair report on the work I've currently done on Pedant, including its advantages and disadvantages. However, I would love to be funded to work on projects similar to Pedant in the future, and I am biased by the fact that it's what I'm good at. So as much as I will try to be unbiased in my approach, I would like to declare that this report may have you see Pedant through rose coloured glasses. Tldr: I am building a computer language called Pedant, designed for writing cost effectiveness calculations. It can check for missing assumptions and errors within your calculations; it is statically typed and comes with a dimensional checker within its type checker. State of Cost Effectiveness Analysis When we decide which intervention to choose over another, one of the gold standards is to be handed a cost effectiveness analysis to look over, showing that your dollar goes further on intervention A rather than B. 
A cost effectiveness analysis also offers the opportunity to disagree with a calculation, to critique the values of parameters, and to incorporate and adjust other considerations. However, when looking at EA's CEAs in the wild, many things are lacking. I'm going to detail what I see as problems, and introduce my solution that could help improve the quality and quantity of CEAs. The Ceiling of CEAs When taking a look at the CEAs that founded EA, particularly GiveWell's CEAs, as much as they are incredible and miles ahead of anything else we have, I can identify a collection of improvements that would be lovely to see. Before going further, I need to add the disclaimer that I'm not claiming that GiveWell's work is of low quality. What GiveWell has done is well and truly miles ahead of its time, and honestly still is, but that doesn't mean there aren't some possible improvements that I can identify. Furthermore, I may have a different philosophy than GiveWell, as I would definitely consider myself more of a Sequence Thinker rather than a Cluster Thinker. The first and clearest need for improvement is that of formally considering uncertainty. GiveWell calculations do not consider uncertainty in their parameters, and therefore do not consider uncertainty in their final results. GiveWell's discussion of uncertainty is often qualitative, saying that deworming is “very uncertain”, and not going much further than that. This issue has been identified, and Cole Haus did an incredible job of quantifying the uncertainty in GiveWell CEAs. This work hasn't yet been incorporated into GiveWell's research. Considering uncertainty in calculations can be done within Excel and spreadsheets, and some (particularly expensive) industrial options such as Oracle Crystal Ball and @RISK are available. Currently, Guesstimate, created by our own Ozzie Gooen, is a great option for considering uncertainties in a spreadsheet-like fashion. 
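The Guesstimate-style approach mentioned above is cheap to sketch: instead of point estimates, each parameter is sampled from a distribution, and the bottom-line figure becomes a distribution too. Here is a minimal Monte Carlo illustration in Python; every parameter range is invented purely for illustration (these are not GiveWell's or anyone's real numbers):

```python
import random

# Toy Monte Carlo propagation of parameter uncertainty through a
# cost-effectiveness calculation. All numbers are made up.
random.seed(0)
N = 50_000

samples = []
for _ in range(N):
    cost_per_net = random.uniform(3.0, 6.0)            # USD per net (illustrative)
    nets_per_death_averted = random.uniform(300, 900)  # nets per death averted (illustrative)
    samples.append(cost_per_net * nets_per_death_averted)

# Report a distribution summary rather than a single point estimate.
samples.sort()
median = samples[N // 2]
lo, hi = samples[int(0.05 * N)], samples[int(0.95 * N)]
print(f"cost per death averted: median ${median:,.0f}, 90% interval ${lo:,.0f} to ${hi:,.0f}")
```

Tools like Guesstimate, @RISK, and Crystal Ball wrap this same idea in a spreadsheet interface; the point is only that carrying distributions through a calculation, rather than point values, is inexpensive.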
The next few issues are smaller and much more pedantic. When working with classical tools, it's very easy to make possible errors in calculations. Particularly, looking through GiveDirectly's Cost Eff...
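To make the "dimensional checker" from the Tldr concrete, here is a sketch of the idea in plain Python rather than Pedant's own syntax (the unit names and figures are invented for illustration): each quantity carries its units, multiplication combines them, and adding quantities with mismatched units raises an error instead of silently producing a meaningless number.

```python
class Q:
    """A value tagged with dimensional units, e.g. Q(4.0, {"usd": 1, "net": -1})."""

    def __init__(self, value, units=None):
        self.value = value
        # Drop dimensions whose exponent cancelled to zero.
        self.units = {d: e for d, e in (units or {}).items() if e != 0}

    def __mul__(self, other):
        units = dict(self.units)
        for d, e in other.units.items():
            units[d] = units.get(d, 0) + e
        return Q(self.value * other.value, units)

    def __add__(self, other):
        if self.units != other.units:
            raise TypeError(f"unit mismatch: {self.units} vs {other.units}")
        return Q(self.value + other.value, self.units)

# Illustrative only: $4 per net, 500 nets per death averted.
cost_per_net = Q(4.0, {"usd": 1, "net": -1})
nets_per_death = Q(500.0, {"net": 1, "death_averted": -1})

cost_per_death = cost_per_net * nets_per_death
# "net" cancels: 2000.0 with units {"usd": 1, "death_averted": -1}

# cost_per_net + nets_per_death would raise TypeError: the units disagree,
# which is exactly the class of spreadsheet error a dimensional checker catches.
```

A statically typed language like Pedant performs this check at compile time rather than at runtime, so a calculation with inconsistent units never runs at all.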
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We need alternatives to Intro EA Fellowships, published by Ashley Lin on The Effective Altruism Forum. Thanks to Kuhan Jeyapragasan, Akash Wasil, Olivia Jimenez, and Lizka Vaintrob for helpful comments / conversations. TLDR: I think group organizers have become anchored on the idea of Intro EA Fellowships as 8-week small-group things, which might actually be a sub-optimal way to introduce promising students to the EA world. We need new alternatives that are exciting, immersive, and enable EA-interested students to move as quickly as they'd like through the EA funnel. The Intro EA Fellowship (also known as the Arete Fellowship) is a program where a small group of fellows and a facilitator meet multiple times to learn about some aspect of effective altruism. Stanford EA's virtual Intro EA Fellowship was the first structured EA program I was part of, and where I realized EA was a thriving community with real humans in it. I don't think my experience is unique. In uni EA groups around the world, the Intro EA Fellowship is one of the core programs offered to students. Some context on Intro EA Fellowships: Fellowships are usually structured in cohorts of 2-5 fellows and a facilitator who meet weekly for 6-10 weeks. Each week, fellows do some amount of reading and participate in a discussion with their cohort. There are often also social activities that allow fellows to get to know each other better and cultivate friendships. Uni group organizers often use Intro EA Fellowships as an early/mid part of the funnel for highly-engaged EA students. Intro EA Fellowships exist in so many places because they have some upsides, some of which I list below. That said, I also think there are some strong downsides to the Intro EA Fellowship model as it currently exists. 
As a participant, I wasn't particularly impressed by my Intro EA Fellowship experience -- I'm not sure if my fellowship cohort actually finished (people got busy) and think there's a chance I would've bounced off the EA community had I not attended a summer EA retreat for high school students a couple months later. Now, as I help organize Penn EA's Intro Fellowship cohort, I'm noticing how my frustrations as an Intro Fellowship participant weren't unique to me. In this post, I share some of those frustrations (downsides to the Intro Fellowship as I see them). I try to steelman the argument for why Intro EA Fellowships should exist. Finally, I introduce some Intro EA Fellowship alternatives that I'd be excited to see uni group organizers prototype. I'm personally planning to try some of these -- if you'd like to coordinate, please reach out at ashley11@wharton.upenn.edu! Downsides of Intro Fellowships Much of this is observed through my own experience being part of / facilitating fellowships. I also think Intro Fellowship experiences are highly variable depending on one's cohort and facilitator, and I think there's a good chance I've had a relatively worse Intro EA Fellowship experience than others: The standard 8-week fellowship timeline is too slow for people who are really excited early on and want to move faster. In these scenarios, the fellowship might actually slow people down and cause their excitement to fade (an hour-long conversation and some readings each week is a pretty sluggish pace) -- worst-case scenario, it might cause some promising people to lose interest. I want to cultivate a fast-paced vibe among fellows, instead of a discussion-group-like vibe. (To me, “fast-paced,” when applied to something that seems interesting, cultivates genuine excitement and deep curiosity.) 
For example, when I first found 80,000 Hours, I pulled an all-nighter reading it and was absolutely ecstatic that people had spent so much time thinking about triaging problems and how one can do the most good in the world. In my early EA days...
In this episode, Dave and Jamison answer these questions: Questions This question came with a delightful ASCII-art diagram that I will now dictate as follows: “pipe space space space space” JK TLDR: I want to move up the ranks but I’m not sure what might await me… except meetings. What should I expect? And how do I get there? Too Small, Want MOAR! I work in a big enterprise as a Tech Lead in an “agile team”. So day-to-day I focus on getting our team to build the current feature we’re meant to be building (eg by helping other devs, attending meetings, and sometimes writing code). The next step for my career would be what we call an “Engineering Lead” but I’m having a hard time figuring out what that role actually is, and our “EL” is so slammed with meetings I’m afraid to take any of their time to ask… SO - Dave & Jamison, can you enlighten me? What might the goals and life be of someone at that level, and how would someone who still codes every day(ish) start figuring out what to do to get there? P.S. It’s taken me about 4 years but I’ve finally managed to listen to every single SSE episode! (I have a kid, binging podcasts isn’t possible for me). P.P.S. In an interview recently I was asked “What’s the most valuable piece of advice you were ever given?” to which I replied “To negotiate for better benefits in job interviews, got it from a podcast called ‘Soft Skills Engineering’”. The interviewer thought that was cool, subscribed to your podcast during the interview, then REFUSED TO NEGOTIATE ON ANYTHING! >:( Living in a small town, my options as a software engineer have been limited to working for one company straight out of uni for 7 years. Wanting to develop in my career, and knowing you have advised others in the past to move on from their first job out of uni, what is your opinion of seeking out and switching jobs into remote work? Will this provide the same development value found in a traditional job switch, especially after the impact COVID has had on the way companies see remote work?
'What on earth is Clubhouse?' I hear you ask. Well, it's kind of like being in a Whatsapp room with Richard Branson...and instead of typing you're all just leaving live voicenotes. It's like a Live Podcast and you get to contribute (sometimes). It's also like a cult - it's super addictive and invitation-only (at the time of publishing this). https://www.youtube.com/watch?v=s9POp8N9-Os Kevin Rose - who helps Dentists 'Think' My favourite thing about Clubhouse is that sometimes you're not in the right state or environment to be on video - this audio-only platform has gained a lot of popularity! I have seen some great Rooms (like a Whatsapp group) within Dentistry where a lot of knowledge bombs have been dropped. There is something beautiful about Live content that is difficult to get a replay for - the FOMO factor is real! In this Interference Cast I am joined by Kevin Rose who helps drive better conversations in Dentistry. TLDR: I think Dentistry has a home in Clubhouse - we can learn and share great content (live podcast, right?) - we can also use it to change public perceptions of Dentistry. It's probably not going to land you many patients in your chair, if that's why you're on it. You need to see a bigger picture!
I don't introduce myself in these blogs or episodes, which may leave some of you wondering, "Who is this Lance person, and why in the world should I take his advice on startups?" I thought so; that's the topic for today. TLDR: I was an entrepreneur who ran my own business for 13 years. I then stayed on as Chief Scientist for the acquiring company, where I helped develop their technology, delivered sales presentations, managed their PR, and ran their marketing department. Since 2012, I've also been an active angel investor and startup mentor. Before all that, I was an astrophysicist, so I bring a scientific approach to the startup process. The longer version: I grew up in an academic household where both my parents were professors. One of them was a physicist and the other a sociologist. From the age of six, I planned to follow my father's footsteps and become a physicist. I went to graduate school at UCSD to study astrophysics. I was working with the Hubble Space Telescope and the Keck, trying to understand the early Universe. In my spare time, I started dabbling around with cryptography, privacy systems, and building anonymous email systems. About the time that started to take off and get exciting, I realized that the Hubble Space Telescope wasn't big enough to answer the questions I was trying to ask. That was getting frustrating, so I put my Ph.D. on hold and founded what became Anonymizer, a company focused on consumer internet anonymity. It allowed people to avoid all tracking on the Web. I grew the business for several years, but we began to hit a plateau around 2000. And, of course, 2000 was an exciting time to be in an unfunded startup. That was when the .COM collapse happened and all the fundraising dried up. It was touch and go to survive at all. After that, there was the 9/11 attack on the Twin Towers and the Pentagon. We, like everyone else, started to wonder what part we could play. 
We started reaching out to people that we'd met in the government, mostly in the FBI, because they kept subpoenaing us for records on our anonymous users. We were able to talk to them about how they were conducting online undercover operations, and we pivoted to focus on building covert operational platforms for the national security community. This model was very successful. These customers had extreme pain points and were willing to pay a lot to have them solved, and we were the only people around doing it. Between 2001, when we started selling to the government, and 2006, that segment of our business went from about 1% to more than 95% of our total revenues. Around that time, we realized we didn't have the background or government connections to take the business where we wanted to go. So we started looking at being acquired by a Beltway Insider, and in 2008 we had an excellent exit to a small systems integrator. The founders of that company were all former spooks and had the connections, knowledge, and understanding to take the solution where I couldn't. But, being spooks, they weren't going to talk to the media, so I ended up becoming the face of the company that bought mine. I did all the PR. I did most of the public speaking. I wrote the company blog, and it was my face & voice any time we needed to speak publicly. After a few years of being the Chief Scientist for this company, I decided I didn't want to live in the DC area anymore, so I moved out to Wine Country in California, where I could telecommute. That was when I started to get involved in angel investing. One of the first things I did when I moved out here was join the North Bay Angels and get involved in a startup mentoring program. Helping startups became a passion of mine. I discovered that I loved working with and helping these early-stage companies achieve success. At this point, I don't need more outward trappings of success. 
I'm enjoying living on a hilltop next to my vineyard with beautiful views and an excellent wine cellar. Now I'm much more interested in giving back to other companies. The great thing about advising is it provides most of the fun of being a Founder without the hundred-hour work weeks and constant existential dread. Shortly after joining the North Bay Angels, they invited me to be on the board and their selection committee. The committee is the group within the North Bay Angels that looks at all of the applicant companies and decides which will present to the entire group. That is a great experience because I get to see so many different pitches. These are not the finely polished best of the best. I see many rough presentations, which helps me know what makes the best of them shine. A problem with the in-person advising was that I could only meet a limited number of companies, and I wanted to help a vastly larger number of Founders. That's why I created Feel the Boot as a platform where, instead of doing just one-on-one advising, I could put this information out on the Web where it would be accessible to anyone. Then, if they needed more specific individual coaching, they could seek me out, and I'd be able to do that. At the beginning of 2020, I walked away from my role as Chief Scientist to focus full-time on advising. Later in 2020, I joined the Founder Institute. I reached out to them, and they offered to make me a global entrepreneur in residence. That means I'm advising their companies everywhere in the world. One of the great things about this is I talk to many companies with different issues and problems. I am continually learning from the experiences of every founder I help. The advice I give through Feel the Boot comes from my history as a founder, my experiences as an investor, and learning from one-on-one advising, consulting, and BoD work with founders. But everything always comes back to my experiences as an academic and a scientist. 
It shapes the way I think about everything. I am always looking for patterns. Why do startups work the way they work, and how can we understand them at a fundamental level? I want to get founders away from the "cargo cult" approach where they think that if they emulate a pattern, it should work because it worked for someone else. I want to get to the fundamental why and how. Think deeply about what you're doing so that you can take what's unique about your business and put it in the best light, and leverage it in the best possible way. I question whether this will be useful to anyone, but hopefully, this gives you some idea of where I come from, why I'm passionate about startups, and why the things I say hopefully carry some weight. Till next time … Ciao.
TLDR: “I skate to where the puck is going to be, not to where it has been.” - The Great One. https://fourweekmba.com/fang-companies/
Rob and Chris give an important update regarding the Coronavirus outbreak from their respective bunkers. Chris discovers a new underground hip hop artist on youtube. Part nine of an ongoing series of stupidity in a time of uncertainty. TLDR: I would do pills to this.
Rob and Chris learn about the trial of Socrates and the four men he considered friends. TLDR: I loved that pig.
This week the guys learn about the secret death of Alexander the Great. First his unborn fetus son and "special" son, Phillip, are in charge. But then they get killed and Alex's best friends kept things "business as usual", by splitting up the kingdom. TLDR: I need my Bud Light girls almost falling over.
Rob and Chris continue to learn about ancient Greece, specifically the crazy Spartans. TLDR: I will show you how Spartan I am!
Rob and Chris learn more about the Assyrians, Babylon, and the singular cultural identity of the Greeks. TLDR: I brought wine, suck my dick!
Rob and Chris talk about baptizing Couper on this special bonus episode. TLDR: I just want to make sure he gets into heaven.
Rob and Chris learn about the earliest Indian civilizations, except that there's only one early book and it doesn't really say much. TLDR: I don't want to go back to Utah.
In Part III of Chapter 13, Rob and Chris learn about the different types of Muslims and what's in store for modern Islam. The plotline of the new Guardians of the Galaxy sequel is revealed by the Sunnis, and it is not what you expected. Two groups of Shi'ites, known as the Seveners and Twelvers, believe there are divine imams hiding somewhere in the galaxy, and they could be literally anywhere. One of the most influential Muslim scholars, Abu-Hamid al-Ghazali, decides to abandon his job and family in an attempt to get closer to God through poverty, and his wife says good riddance. Sufis begin to organize themselves into spiritual fraternities, and their pledges, known as fakirs or dervishes, are put through some pretty degrading tests. Islam suffered for a while by not accepting industrialization while the rest of the world passed it by, but then saw a resurgence at the end of WWI as the rest of the world's powers became more and more dependent on its large supply of crude oil. The Muslim calendar is extremely confusing, but that's just how they like it. TLDR: I don't bet on much, but I always bet on Hanifites.
This week the guys are supposed to learn about electronic technology. But they don't. First they are introduced to a self-aware gremlin robot. Be careful, though, he'll bathe in your blood. Nobody beats the WIZ when you're making your dick medium girth with Zenith Dick pills! The guys go to the dopest bachelor party ever with their micro dick stripper. Chris gets mad when the ice cream man doesn't sell him fries, movie tickets, and music requests. A Swiss Army knife is the only thing you need to live. And they discuss their favorite game show, To Catch a Predator! Wow, that's how a CD is made… who cares. And the plasma TV industry is a conspiracy, man! TLDR: I will bathe in your blood
This week The Dunce Caps talk about heat and heat energy. Rob and Chris get mad at their grandpas. They also don't appreciate the way this book talks down to them. The guys consider getting nipple piercings, like their favorite actor Guy Fieri. An agreement is made about what the first real experiment will be: jarring a fart. We meet a sad family who is missing their mother. They all speak Italian, except they're Irish. Who the fuck eats fruit bars!? We want fudge-pops! Yoo-hoo versus Nesquik. Yoo-hoo always wins. Rob was raised on Choco Tacos and the ice cream man. Chris and Rob are fucking genius inventors! The shredded cheese bag and the to-go butter thermos. The guys reminisce about eating snacks at the movie theater. Rob is a master cheese chef, with hundreds of Velveeta recipes. Chris opens a white trash, all-American bar restaurant. TLDR: "I go to the movies for the snacks."
TLDR: I got extremely sick at Disney World from dehydration and have learned my lesson not to push too hard, too often. I'm taking it easy without burning the midnight oil and will be back with a FULL recap of my Disney World and Universal Orlando trip (6 parks and 3 hotels in 7 days, no wonder lmao) THURSDAY! One day later than normal!!! ok ttyl love you bye Our Sponsors: * Buy the coolest stuff (at the best prices!) at Quince: quince.com/amusing Support this podcast at — https://redcircle.com/very-amusing-with-carlye-wisel/donations Advertising Inquiries: https://redcircle.com/brands Privacy & Opt-Out: https://redcircle.com/privacy