POPULARITY
Our computer system has gone down, and we are in the process of rebuilding it. This is taking time, and we are almost done. New episodes of your favorite Major Spoilers Shows will appear on Tuesday April 16 2024. Show your thanks to Major Spoilers for this episode by becoming a Major Spoilers Patron at http://patreon.com/MajorSpoilers. It will help ensure the Major Spoilers Podcast continues far into the future! Join our Discord server and chat with fellow Spoilerites! (https://discord.gg/jWF9BbF)
José Luis Cova & Simón Petit present: JazzTaBueno 32/2023 *FUTURE PEOPLE*
01. LISA RICHARDS – EVERY STAR
02. ALABAMA SHAKES – FUTURE PEOPLE
03. JOSS STONE – RIGHT TO BE WRONG
04. MASSIVE ATTACK – BLACK MILK
05. H.E.R. – CARRIED AWAY
06. FKJ – SO MUCH TO ME
07. THE URBAN RENEWAL PROJECT – MY OWN WAY (feat. AUBREY LOGAN)
08. OZ NOY – ICE PICK
09. MAYSA – LOVIN' YOU IS EASY
Our production music is new and innovative in many ways. It also engages and inspires our loyal public radio family with the current explosion of talent and creativity across the spectrum of jazz and related music.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New Princeton course on longtermism, published by Calvin Baker on September 2, 2023 on The Effective Altruism Forum.
This semester (Fall 2023), Prof Adam Elga and I will be co-instructing Longtermism, Existential Risk, and the Future of Humanity, an upper-division undergraduate philosophy seminar at Princeton. (Yes, I did shamelessly steal half of our title from The Precipice.) We are grateful for support from an Open Phil course development grant and share the reading list here for all who may be interested.
Part 1: Setting the stage
Week 1: Introduction to longtermism and existential risk
Core:
- Ord, Toby. 2020. The Precipice: Existential Risk and the Future of Humanity. London: Bloomsbury. Read the introduction, chapter 1, and chapter 2 (pp. 49-56 optional); chapters 4-5 optional but highly recommended.
Optional:
- Roser (2022) "The Future is Vast: Longtermism's perspective on humanity's past, present, and future" Our World in Data
- Karnofsky (2021) "This can't go on" Cold Takes (blog)
- Kurzgesagt (2022) "The Last Human - A Glimpse into the Far Future"
Week 2: Introduction to decision theory
Core:
- Weisberg, J. (2021). Odds & Ends. Read chapters 8, 11, and 14.
- Ord, T., Hillerbrand, R., & Sandberg, A. (2010). "Probing the improbable: Methodological challenges for risks with low probabilities and high stakes." Journal of Risk Research, 13(2), 191-205. Read sections 1-2.
Optional:
- Weisberg, J. (2021). Odds & Ends chapters 5-7 (these may be helpful background for understanding chapter 8, if you don't have much background in probability)
- Titelbaum, M. G. (2020) Fundamentals of Bayesian Epistemology chapters 3-4
Week 3: Introduction to population ethics
Core:
- Parfit, Derek. 1984. Reasons and Persons. Oxford: Oxford University Press. Read Part 4, sections 120-23, 125, and 127 (pp. 355-64, 366-71, and 377-79).
- Parfit, Derek. 1986. "Overpopulation and the Quality of Life." In Applied Ethics, ed. P. Singer, 145-164. Oxford: Oxford University Press. Read sections 1-3.
Optional:
- The remainder of Part IV of Reasons and Persons and of "Overpopulation and the Quality of Life"
- Greaves (2017) "Population Axiology" Philosophy Compass
- McMahan (2022) "Creating People and Saving People" section 1, first page of section 4, and section 8
- Temkin (2012) Rethinking the Good section 12.2 (pp. 416-17) and section 12.3 (esp. pp. 422-27)
- Harman (2004) "Can We Harm and Benefit in Creating?"
- Roberts (2019) "The Nonidentity Problem" SEP
- Frick (2022) "Context-Dependent Betterness and the Mere Addition Paradox"
- Mogensen (2019) "Staking our future: deontic long-termism and the non-identity problem" sections 4-5
Week 4: Longtermism: for and against
Core:
- Greaves, Hilary and William MacAskill. 2021. "The Case for Strong Longtermism." Global Priorities Institute Working Paper No. 5-2021. Read sections 1-6 and 9.
- Curran, Emma J. 2023. "Longtermism and the Complaints of Future People." Forthcoming in Essays on Longtermism, ed. H. Greaves, J. Barrett, and D. Thorstad. Oxford: OUP. Read section 1.
Optional:
- Thorstad (2023) "High risk, low reward: A challenge to the astronomical value of existential risk mitigation." Focus on sections 1-3.
- Curran, E. J. (2022). "Longtermism, Aggregation, and Catastrophic Risk" (GPI Working Paper 18-2022). Global Priorities Institute.
- Beckstead (2013) "On the Overwhelming Importance of Shaping the Far Future" Chapter 3
- "Toby Ord on why the long-term future of humanity matters more than anything else, and what we should do about it" 80,000 Hours podcast
- Frick (2015) "Contractualism and Social Risk" sections 7-8
Part 2: Philosophical problems
Week 5: Fanaticism
Core:
- Bostrom, N. (2009). "Pascal's mugging." Analysis, 69(3): 443-445.
- Russell, J. S. "On two arguments for fanaticism." Noûs, forthcoming. Read sections 1, 2.1, and 2.2.
- Temkin, L. S. (2022). "How Expected Utility Theory Can Drive Us Off the Rails." In L. S. ...
In this week's episode of the New Flesh Podcast, Ricky and Jon interview Jonathan Anomaly. Jonathan is the Academic Director of the Center for Philosophy, Politics, & Economics in Quito, Ecuador. His book "Creating Future People: The Ethics of Genetic Enhancement" explores the social implications of emerging reproductive technologies, including embryo selection. Topics covered include: IVF screening, emerging reproductive technologies, embryo selection, the social and moral implications of gene editing, eugenics, and more.
ARTICLES AND LINKS DISCUSSED
Jonathan Anomaly official website: https://jonathan-anomaly.com/
FOLLOW THE CONVERSATION ON REDDIT: https://www.reddit.com/r/thenewfleshpodcast/
SUPPORT THE NEW FLESH
Buy Me A Coffee: https://www.buymeacoffee.com/thenewflesh
Instagram: @thenewfleshpodcast
Twitter: @TheNewFleshpod
Follow Ricky: @ricky_allpike on Instagram / @NewfleshRicky on Twitter
Follow Jon: @thejonastro on Instagram
Logo Design by Made To Move: @made.tomove on Instagram
Theme Song: Dreamdrive "Vermilion Lips"
Johnny the Anomaly joins Spencer Case to argue that the potential benefits of genetic enhancement outweigh the risks (Spencer is skeptical). The electronic version of Anomaly's book, Creating Future People: The Ethics of Genetic Enhancement, can be downloaded free from Amazon for Kindle or here: https://www.taylorfrancis.com/books/oa-mono/10.4324/9781003014805/creating-future-people-jonathan-anomaly
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Predicting what future people value: A terse introduction to Axiological Futurism, published by Jim Buhler on March 24, 2023 on The Effective Altruism Forum.
Why this is worth researching
Humanity might develop artificial general intelligence (AGI), colonize space, and create astronomical amounts of things in the future (Bostrom 2003; MacAskill 2022; Althaus and Gloor 2016). But what things? How (dis)valuable? And how does this compare with things grabby aliens would eventually create if they colonize our corner of the universe? What does this imply for our work aimed at impacting the long-term future? While this depends on many factors, a crucial one will likely be the values of our successors.
Here's a position that might tempt us while considering whether it is worth researching this topic: Our descendants are unlikely to have values that are both different from ours in a very significant way and predictable. Either they have values similar to ours or they have values we can't predict. Therefore, trying to predict their values is a waste of time and resources.
While I see how this can seem compelling, I think this is very ill-informed. First, predicting the values of our successors – what John Danaher (2021) calls axiological futurism – in worlds where these are meaningfully different from ours doesn't seem intractable at all. Significant progress has already been made in this research area, and there seems to be room for much more (see the next section and the Appendix). Second, a scenario where the values of our descendants don't significantly differ from ours appears quite unlikely to me. We should watch for things like the End of History illusion here. Values seem to notably evolve through history, and there is no reason to assume we are special enough to make us drop that prior.
Besides being tractable, I believe axiological futurism to be uncommonly important, given its instrumentality in answering the crucial questions mentioned earlier. It therefore also seems unwarrantedly neglected as of today.
How to research this
Here are examples of broad questions that could be part of a research agenda on this topic:
- What are the best predictors of future human values? What can we learn from usual forecasting methods?
- How have people's values changed throughout history? Why? What can we learn from this? (See, e.g., MacAskill 2022, Chapter 3; Harris 2019; Hopster 2022.)
- Are there reasons to think we'll observe less change in the future? Why? Value lock-in? Some form of moral convergence happening soon?
- Are there reasons to expect more change? Would that be due to the development of AGI, whole brain emulation, space colonization, and/or accelerated value drift? More broadly, what impact will future technological progress have on values? (See Hanson 2016 for a forecast example.)
- Should we expect some values to be selected for? (See, e.g., Christiano 2013; Bostrom 2009; Tomasik 2017.)
- Might a period of "long reflection" take place? If yes, can we get some idea of what could result from it?
- Does something like coherent extrapolated volition have any chance of being pursued and, if so, what could realistically result from it?
- Are there futures – where humanity has certain values – that are unlikely but worth wagering on?
- Might our research on this topic affect the values we should expect our successors to have by, e.g., triggering a self-defeating or self-fulfilling prophecy effect? (Danaher 2021, section 2)
- What do/will aliens value (see my forthcoming next post), and what does that tell us about ourselves?
John Danaher (2021) gives examples of methodologies that could be used to answer these questions. Also, my Appendix references examples and other relevant work, including the (forthcoming) next posts in this sequence.
Acknowledgment
Thanks to Anders Sandberg for pointing m...
Brandon visits the podcast to talk about the albums that became part of his personality, the ones he was and still is associated with, as well as the songs that give him a sense of empowerment. Records by Belanova, Blood Orange, and Camera Obscura, along with "Bestia" by Hello! Seahorse and "Future People" by Alabama Shakes, are just part of the musical catalog that accompanies Brandon through his life.
What is going to happen with the Daily Show on Comedy Central? The network has announced guest hosts while it figures out its plans now that Trevor Noah is stepping down. Lizzo and Shania Twain won big at last night's People's Choice Awards. We played 5 Second Showdown and Would You Rather!
Listen in as Pastor Josh Coats continues our sermon series, Creating Our Future. This week Josh preaches on People of the Word.
If the human race lasts as long as a typical mammalian species and our population continues at its current size, then there are 80 trillion people yet to come. Oxford philosophy professor William MacAskill says it's up to us to protect them. In his bold new book, "What We Owe the Future," MacAskill makes a case for longtermism. He believes that how long we survive as a species may depend on the actions we take now. --- To hear the Book Bite for "What We Owe the Future," download the Next Big Idea app at nextbigideaclub.com/app
I find imagining future people looking back on present-day longtermism (the view that positively influencing the long-term future should be a key moral priority) a helpful intuition pump, especially re: a certain kind of "holy sh**" reaction to existential risk, and to the possible size and quality of the future at stake.
Text version here. Edited for Joe Carlsmith by TYPE III AUDIO.
Future people matter. Agree or disagree? LONGTERMISM is a perspective or ethical stance which gives priority to improving the long-term future. How much ethical or moral responsibility do we as Christians and/or non-Christians have to the unborn generations? Tony Minear and Charity Gleason-Davis discuss.
To watch the video podcast, click this link: Beatitudes Livestream Events | Ruminate on That!
Next chance to join in LIVE: Sunday, August 28 @ 8:45a (MST)
*music by Jon Lang
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Prioritizing x-risks may require caring about future people, published by elifland on August 14, 2022 on The Effective Altruism Forum.
Introduction
Several recent popular posts (here, here, and here) have made the case that existential risks (x-risks) should be introduced without appealing to longtermism or the idea that future people have moral value. They tend to argue or imply that x-risks would still be justified as a priority without caring about future people. I felt intuitively skeptical of this claim and decided to stress-test it. In this post, I:
- Argue that prioritizing x-risks over near-term interventions and global catastrophic risks may require caring about future people.
- Disambiguate connotations of "longtermism", and suggest a strategy for introducing the priority of existential risks.
- Review and respond to previous articles which mostly argued that longtermism wasn't necessary for prioritizing existential risks.
Prioritizing x-risks may require caring about future people
I'll do some rough analyses on the value of x-risk interventions vs. (a) near-term interventions, such as global health and animal welfare, and (b) global catastrophic risk (GCR) interventions, such as reducing the risk of nuclear war. I assume a lack of caring about future people to test whether it's necessary for prioritizing x-risk above alternatives. My goal is to do a quick first pass, which I'd love for others to build on / challenge / improve!
I find that without taking into account future people, x-risk interventions are approximately as cost-effective as near-term and GCR interventions. Therefore, strongly prioritizing x-risks may require caring about future people; otherwise, it depends on non-obvious claims about the tractability of x-risk reduction and the moral weights of animals.
Rough estimated cost-effectiveness, current lives only ($ per human-life-equivalent saved):
- General x-risk prevention (funding bar): $125 to $1,250
- AI x-risk prevention: $375
- Animal welfare: $450
- Bio x-risk prevention: $1,000
- Nuclear war prevention: $1,250
- GiveWell-style global health (e.g. bednet distribution): $4,500
Estimating the value of x-risk interventions
This paper estimates that $250B would reduce biorisk by 1%. Taking Ord's estimate of 3% biorisk this century and a population of ~8 billion, we get: $250B / (8B × .01 × .03) = $104,167/life saved via biorisk interventions. The paper calls this a conservative estimate, so a more optimistic one might be 1-2 more OOMs as effective, at ~$10,000 to ~$1,000/life saved; let's take the optimistic end of $1,000/life saved as a rough best guess, since work on bio x-risk likely also reduces the likelihood of deaths from below-existential pandemics, and these seem substantially more likely than the most severe ones.
For AI risk, 80,000 Hours estimated several years ago that another $100M/yr (for how long? let's say 30 years) can reduce AI risk by 1%; it's unclear if this percentage is absolute or relative, but relative seems more reasonable to me. Let's again defer to Ord and assume 10% total AI risk. This gives: ($100M × 30) / (8B × .01 × .1) = $375/life saved.
On the funding side, Linch has ideated a .01% Fund which would aim to reduce x-risks by .01% for $100M-$1B. This implies a cost-effectiveness of ($100M to $1B) / (8B × .0001) = $125 to $1,250/life saved.
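Every estimate above follows the same pattern: dollars spent divided by expected current lives saved, where expected lives saved is population × baseline risk × risk reduction. A minimal Python sketch of that arithmetic follows, reproducing the post's three calculations under its stated inputs; the function name and constants are illustrative, not from the post.

# A minimal sketch (not from the post) of the cost-effectiveness arithmetic:
# cost per current life saved = budget / (population * risk * risk_reduction).

POPULATION = 8e9  # current world population assumed throughout the post

def cost_per_life_saved(budget_usd, baseline_risk, risk_reduction, population=POPULATION):
    """Dollars per current life saved, ignoring future people."""
    expected_lives_saved = population * baseline_risk * risk_reduction
    return budget_usd / expected_lives_saved

# Biorisk: $250B buys a 1% relative cut of Ord's 3% biorisk estimate.
print(cost_per_life_saved(250e9, 0.03, 0.01))       # ~104,167
# AI risk: $100M/yr for 30 years buys a 1% relative cut of 10% AI risk.
print(cost_per_life_saved(100e6 * 30, 0.10, 0.01))  # ~375
# .01% Fund: $100M-$1B buys an absolute 0.01% x-risk reduction
# (baseline_risk=1.0 expresses the absolute reduction directly).
print(cost_per_life_saved(100e6, 1.0, 0.0001))      # ~125
print(cost_per_life_saved(1e9, 1.0, 0.0001))        # ~1,250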
Comparing to near-term interventions
GiveWell estimates it costs $4,500 to save a life through global health interventions. This post estimates that animal welfare interventions may be ~10x more effective, implying $450/human-life-equivalent saved, though this is an especially rough number.
Comparing to GCR interventions
Less obviously than near-term interventions, a potential issue with not caring about future people is over-prioritizing global catastrophic risks (that might kill a substantial percentage o...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Common-sense cases where "hypothetical future people" matter, published by levin on August 12, 2022 on The Effective Altruism Forum.
Motivation
A few weeks ago, I was speaking with a student (call him Josh) who was skeptical of the importance of existential risk and the validity of longtermism. He said something like, "What matters to me is kids going hungry today, not hypothetical future people." I found this to be a moving, sincere objection, so I thought about where it seemed to go wrong and offered Josh a version of Case 1 below, and he seemed pretty convinced.
Josh's skepticism echoes the dismissal of "merely possible people" expressed by critics who hold presentist person-affecting views — that is, they believe that "an act can only be bad [or good] if it is bad [or good] for someone," where "someone" is a person who exists at the time of the act. The current non-existence of future people is a common objection to taking their well-being into moral consideration, and it would be good for longtermists to have cases ready to go that illustrate the weaknesses of this view. I developed a couple more cases in Twitter threads and figured I'd combine them into a linkable forum post.
Case 1: The Reformer
You work in a department of education. You spend a full year working on a report on a new kindergarten curriculum that makes kids happier and learn better. It takes a few years for this to circulate and get approved, and a few more for teachers to learn it. By the time it's being taught, 6 years have passed since your work.
I think your work, 6 years ago, was morally significant because of the happier, better-educated students now. But these kindergarteners are (mostly) 5 years old. They didn't even exist at the time of your work.
You remember a conversation you had, while working on the curriculum, with your friend who thinks that "hypothetical future people can't have interests" (and who is familiar with the turnaround times of education reform). The friend shook her head. "I don't know why you're working on this kindergarten curriculum for future people," she said. "You could be helping real people who are alive today. Why not switch to working on a second-grade curriculum?"
Indeed, if only already-existing people matter, you'd be in the weird position where your work would've been morally valuable if you'd written a 2nd grade curriculum but your kindergarten curriculum is morally worthless. Why should the birth year of beneficiaries affect this evaluation?
Case 2: The Climate Resiliency Project
After finishing architecture school, you choose to work at a firm that designs climate resiliency projects. The government of Bangladesh has contracted that firm to design sea walls, on the condition that the work be expedited. You could have worked at a commercial firm for more pay and shorter hours, but you choose to work at the climate-focused firm. The team works for a year on the sea walls project. The Bangladeshi government builds it over the next 20 years. In 2042, a typhoon strikes, and the walls save thousands of lives.
Now, you consider how your choice to work at the climate resiliency firm compared to its alternatives. You think your work on the sea walls accounted for, say, 1% of the impact, saving dozens of lives.
But maybe you could have donated a big share of your larger salary to the Against Malaria Foundation and saved dozens of lives that way instead. If "nonexistent future people" don't matter, we are again in the absurd position of asking, "Well, how many of the lives saved were over the age of 20?" After all, those under 20 didn't exist yet, so you should not have taken their non-existent interests into consideration. As the decades progress, the sea walls save more lives, as the effects of climate change get worse. But the "future people don't matter" view holds that ...
This week Pastor Jeff closed out our sermon series, Future, by discussing how the church needs to focus on people, not programs, and to serve the needs of its local community. The Scripture reading came from Acts 16:1-15.
Enjoy!
A talk given by Emma Curran (Cambridge) at the Moral Sciences Club on 8th March 2022.
When we were children it was easier to believe and imagine that the improbable and impossible could happen. Then life happened. And for many of us: "then this year happened." A primary theme of scripture is a demonstration of God's commitment to redeem, restore, and renew the things in our lives that have been hurt, damaged, and broken. The promise of a new day awaits!
I've accepted Jesus • https://willamette.cc/follow-christ • Let us know you made this decision, and we'll help you discover what's next!
Get Baptized • https://willamette.cc/baptisms • If you have accepted Jesus but have never had the opportunity to celebrate your new life in Him through water baptism, this is your next step!
Find Community • https://willamette.cc/community • Growth happens best in community. Community Life starts this week!
Let us pray with you • https://willamette.cc/prayer • Fill out a quick form to let us know how we can be praying for you.
Sermon Notes - September 26th, 2021
It's A New Day (Laugh Again) - Brian Becker
Psalm 126:1-3 (MSG) It seemed like a dream, too good to be true, when God returned Zion's exiles. We laughed, we sang, we couldn't believe our good fortune. We were the talk of the nations: "God was wonderful to them!" God was wonderful to us; we are one happy people.
Psalm 126:4-6 (MSG) And now, God, do it again — bring rains to our drought-stricken lives So those who planted their crops in despair will shout "Yes!" at the harvest, So those who went off with heavy hearts will come home laughing, with armloads of blessing.
"God gets his family back." - Philip Yancey
Believe Again. Laugh Again. Pray Again. Sing Again. Dream Again.
...We laughed, we sang, we couldn't believe our good fortune...
...And now God do it again...
...So those who went off with heavy hearts will come home laughing...
Proverbs 17:22 (NLT) A cheerful heart is good medicine, but a broken spirit saps a person's strength.
Laughter:
- relaxes the whole body
- boosts the immune system
- triggers the release of endorphins
- protects the heart
- lightens anger's heavy load
Remembered what God had done (Past). Identified what God was doing (Present). Believed what God would do (Future).
People with pain were asked to party... not to forget life, but to remember God.
Exodus 23:14-17a (NLT) "Each year you must celebrate three festivals in my honor. First, celebrate the Festival of Unleavened Bread. For seven days the bread you eat must be made without yeast, just as I commanded you. Celebrate this festival annually at the appointed time in early spring... for that is the anniversary of your departure from Egypt...
Psalm 126:1-3 (MSG) We laughed, we sang, we couldn't believe our good fortune. We were the talk of the nations: "God was wonderful to them!" God was wonderful to us; we are one happy people.
The God who brought you here will not leave you here.
Exodus 23:16a (NLT) "Second, celebrate the Festival of Harvest, when you bring me the first crops of your harvest.
Gratefulness is best expressed through generosity to God and others.
2 Corinthians 9:7b (NIV) ...God loves a cheerful giver.
Exodus 23:16b-17 (NLT) "Finally, celebrate the Festival of the Final Harvest at the end of the harvest season, when you have harvested all the crops from your fields. At these three times each year, every man in Israel must appear before the Sovereign Lord.
The people of God feel the joy of God when they celebrate and place their faith and future in God.
TOGETHER WE MUST...
Remember what God has done (Past)
Identify what God is doing (Present)
Believe what God will do (Future)
God's complete Kingdom that is coming in the future overlaps with the present reality of the world. In the midst of this overlap, we are called to live out the future in the present life that in reality, falls short of God's Kingdom vision.
Jesus followers are meant to be like time travelers from the future, living the life of tomorrow today – on Earth as in Heaven. Join Meghan Good, Jesus Collective Partner and Teaching Pastor at Trinity Mennonite Church, as we explore what it means to be future people living a life of radical love and hope. (This was part of a three-week teaching series called On Earth As In Heaven, produced in partnership with The Meeting House Church.)
Paul was an influencer. Let's look at how he teaches us to be influencers in faith to the world around us, no matter what we are facing.
Why what we believe about how the world ends impacts us now.
A talk about how faith is not the opposite of doubt.
How flourishing could be found along the outermost edge of faith.
Maybe the antidote to burnout and apathy is love.
Ramin Nazer is Episode 1. Wow, Yay, The Cosmic Nod has been born! We talk about all kinds of stuff and he makes me laugh A LOT. If you made a drinking game out of me laughing in this episode, you would DIE. So don't do that. My dogs interrupt us. We talk about legacy, awkwardly running into friends at Trader Joe's, the dissonance between our true selves and how we THINK we are perceived, and lots more. Ramin Nazer is an artist and comedian based in Los Angeles. He is the author of After You Die, Cave Paintings for Future People, and Infinite Elephants. He was named one of the New Faces of Comedy at the Just For Laughs festival in Montreal and has performed stand-up on The Late Late Show on CBS. His debut album You Were Good Too is available for streaming everywhere, along with his weekly podcast Rainbow Brainskull on the Mind Pod Network. To learn more about Ramin, visit www.raminnazer.com. Follow him on twitter and instagram at @raminnazer.