Podcasts about crosspost

  • 58 PODCASTS
  • 123 EPISODES
  • 43m AVG DURATION
  • 1 EPISODE EVERY OTHER WEEK
  • Jun 20, 2024 LATEST

POPULARITY

(Chart: episode popularity by year, 2017–2024)


Best podcasts about crosspost

Latest podcast episodes about crosspost

The Nonlinear Library
EA - Against the Guardian's hit piece on Manifest by Omnizoid

The Nonlinear Library

Play Episode Listen Later Jun 20, 2024 6:55


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Against the Guardian's hit piece on Manifest, published by Omnizoid on June 20, 2024 on The Effective Altruism Forum. Crosspost of this on my blog. The Guardian recently released the newest edition in the smear rationalists and effective altruists series, this time targeting the Manifest conference. The piece, titled "Sam Bankman-Fried funded a group with racist ties. FTX wants its $5m back," is filled with bizarre factual errors, one of which was so egregious that it merited a correction. It's the standard sort of journalistic hit piece on a group: find a bunch of members saying things that sound bad, and then sneeringly report on that as if that discredits the group. It reports, for example, that Scott Alexander attended the conference, and links to the dishonest New York Times smear piece criticizing Scott, as well as a similar hit piece calling Robin Hanson creepy. It then smears Razib Khan, on the grounds that he once wrote a piece for magazines that are paleoconservative and anti-immigration (like around half the country). The charges against Steve Hsu are the most embarrassing - they can't even find something bad that he did, so they just mention half-heartedly that there were protests against him. And it just continues like this - Manifest invited X person who has said a bad thing once, or is friends with a bad person, or has written for some nefarious group. If you haven't seen it, I'd recommend checking out Austin's response. I'm not going to go through and defend each of these people in detail, because I think that's a lame waste of time. I want to make a more meta point: articles like this are embarrassing and people should be ashamed of themselves for writing them. Most people have some problematic views. Corner people in a dark alleyway and start asking them why it's okay to kill animals for food and not people (as I've done many times), and about half the time they'll suggest it would be okay to kill mentally disabled orphans. Ask people about why one would be required to save children from a pond but not to give to effective charities, and a sizeable portion of the time, people will suggest that one wouldn't have an obligation to wade into a pond to save drowning African children. Ask people about population ethics, and people will start rooting for a nuclear holocaust. Many people think their worldview doesn't commit them to anything strange or repugnant. They only have the luxury of thinking this because they haven't thought hard about anything. Inevitably, if one thinks hard about morality - or most topics - in any detail, they'll have to accept all sorts of very unsavory implications. In philosophy, there are all sorts of impossibility proofs, showing that we must give up on at least one of a few widely shared intuitions. Take the accusations against Jonathan Anomaly, for instance. He was smeared for supporting what's known as liberal eugenics - gene editing to make people smarter or make sure they don't get horrible diseases. Why is this supposed to be bad? Sure, it has a nasty word in the name, but what's actually bad about it? A lot of people who think carefully about the subject will come to the same conclusions as Jonathan Anomaly, because there isn't anything objectionable about gene editing to make people better off.
If you're a conformist who bases your opinion about so-called liberal eugenics (a terrible term for it) on the fact that it's a scary term, you'll find Anomaly's position unreasonable, but if you actually think it through, it's extremely plausible, and is even agreed with by most philosophers. Should philosophy conferences be disbanded because too many philosophers have offensive views? I've elsewhere remarked that cancel culture is a tax on being interesting. Anyone who says a lot of things and isn't completely beholden to social co...

Trek Wars
CROSSPOST: Intergalactic - Babylon 5: The Road Home

Trek Wars

Play Episode Listen Later May 24, 2024 49:27


Thanks to Mike Moody-Garcia for letting Kenny guest on his pod! We're back with another ESSENTIAL BABYLON 5 episode. Kenny Madison from the Trek Wars podcast joins Mike to discuss how Babylon 5: The Road Home is very, very good for the franchise. Despite the movie not blowing them away, they feel it can only help inspire more people to watch Babylon 5 and, fingers crossed, pave the way for either an animated or live action series reboot. Enjoy the pod!

Chapters
0:00:00 Introducing the podcast and topic of discussion
0:02:21 Discussion of previous Babylon 5 content and fandom history
0:08:06 Rumors of Babylon 5 Reboot
0:14:33 A Love Letter to Fans, But Not a Great Movie
0:19:14 Rebooting the World: Exciting Possibilities for the Future
0:25:33 Zathras love and hopes for a B5 reboot
0:31:36 Babylon 5 Can Be A Taste for the Acquired Palate
0:41:32 Podcast and Comedy Updates

Follow Intergalactic:
Instagram: @Intergalacticpod
Threads: @Intergalacticpod
Intergalacticpod.co

Follow Mike:
Instagram: @mikemoodygarcia
Threads: @mikemoodygarcia

The Inside View
[Crosspost] Adam Gleave on Vulnerabilities in GPT-4 APIs (+ extra Nathan Labenz interview)

The Inside View

Play Episode Listen Later May 17, 2024 136:08


This is a special crosspost episode where Adam Gleave is interviewed by Nathan Labenz from the Cognitive Revolution. At the end I also have a discussion with Nathan Labenz about his takes on AI. Adam Gleave is the founder of FAR AI, and with Nathan he discusses finding vulnerabilities in GPT-4's fine-tuning and Assistants APIs, FAR AI's work exposing exploitable flaws in "superhuman" Go AIs through innovative adversarial strategies, accidental jailbreaking by naive developers during fine-tuning, and more.

OUTLINE
(00:00) Intro
(02:57) NATHAN INTERVIEWS ADAM GLEAVE: FAR.AI's Mission
(05:33) Unveiling the Vulnerabilities in GPT-4's Fine Tuning and Assistants APIs
(11:48) Divergence Between The Growth Of System Capability And The Improvement Of Control
(13:15) Finding Substantial Vulnerabilities
(14:55) Exploiting GPT-4 APIs: Accidentally jailbreaking a model
(18:51) On Fine Tuned Attacks and Targeted Misinformation
(24:32) Malicious Code Generation
(27:12) Discovering Private Emails
(29:46) Harmful Assistants
(33:56) Hijacking the Assistant Based on the Knowledge Base
(36:41) The Ethical Dilemma of AI Vulnerability Disclosure
(46:34) Exploring AI's Ethical Boundaries and Industry Standards
(47:47) The Dangers of AI in Unregulated Applications
(49:30) AI Safety Across Different Domains
(51:09) Strategies for Enhancing AI Safety and Responsibility
(52:58) Taxonomy of Affordances and Minimal Best Practices for Application Developers
(57:21) Open Source in AI Safety and Ethics
(1:02:20) Vulnerabilities of Superhuman Go Playing AIs
(1:23:28) Variation on AlphaZero Style Self-Play
(1:31:37) The Future of AI: Scaling Laws and Adversarial Robustness
(1:37:21) MICHAEL TRAZZI INTERVIEWS NATHAN LABENZ
(1:37:33) Nathan's background
(01:39:44) Where does Nathan fall in the Eliezer to Kurzweil spectrum
(01:47:52) AI in biology could spiral out of control
(01:56:20) Bioweapons
(02:01:10) Adoption Accelerationist, Hyperscaling Pauser
(02:06:26) Current Harms vs. Future Harms, risk tolerance
(02:11:58) Jailbreaks, Nathan's experiments with Claude

The Cognitive Revolution: https://www.cognitiverevolution.ai/
Exploiting Novel GPT-4 APIs: https://far.ai/publication/pelrine2023novelapis/
Adversarial Policies Beat Superhuman Go AIs: https://far.ai/publication/wang2022adversarial/

Start100K
36 - Crosspost from the Financial Feels Podcast

Start100K

Play Episode Listen Later May 15, 2024 22:55


This was an episode I recorded on the Financial Feels podcast. I had a great conversation with Melissa Mazard! We talked about my childhood, joining the military, my relationship with money and my wife, and many other deep and interesting topics around money. I hope you like it!

Le Super Daily
Yay, it's Monday, and Sora drops the first 100% AI-generated music video!

Le Super Daily

Play Episode Listen Later May 13, 2024 16:08


Episode 1140: Like every Monday, we bring you the best social media news of the week:
Crossposting now possible between Instagram and Threads
Jack Dorsey leaves BlueSky
YouTube boosts its video creation tool with AI
How to become a Top Voice on LinkedIn
Sora drops the first 4-minute, 100% AI-generated music video!
Find all the episode notes at www.lesuperdaily.com!
Le Super Daily is the daily podcast about social media. It is made with a shower of love by the Supernatifs teams. We are a social media agency based in Lyon: https://supernatifs.com. Together, we help companies build lasting and profitable relationships with their audiences. Together, we invent, produce, and distribute content that engages your employees, your prospects, and your consumers. Hosted by Acast. Visit acast.com/privacy for more information.

The Nonlinear Library
LW - Losing Faith In Contrarianism by omnizoid

The Nonlinear Library

Play Episode Listen Later Apr 26, 2024 7:55


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Losing Faith In Contrarianism, published by omnizoid on April 26, 2024 on LessWrong. Crosspost from my blog. If you spend a lot of time in the blogosphere, you'll find a great deal of people expressing contrarian views. If you hang out in the circles that I do, you'll probably have heard Yudkowsky say that dieting doesn't really work, Guzey say that sleep is overrated, Hanson argue that medicine doesn't improve health, various people argue for the lab leak, others argue for hereditarianism, Caplan argue that mental illness is mostly just aberrant preferences and education doesn't work, and various other people expressing contrarian views. Often, very smart people - like Robin Hanson - will write long posts defending these views, other people will have criticisms, and it will all be such a tangled mess that you don't really know what to think about them. For a while, I took a lot of these contrarian views pretty seriously. If I'd had to bet 6 months ago, I'd have bet on the lab leak, at maybe 2 to 1 odds. I'd have had significant credence in Hanson's view that healthcare doesn't improve health until pretty recently, when Scott released his post explaining why it is wrong. Over time, though, I've become much less sympathetic to these contrarian views. It's become increasingly obvious that the things that make them catch on are unrelated to their truth. People like being provocative and tearing down sacred cows - as a result, when a smart articulate person comes along defending some contrarian view - perhaps one claiming that something we think is valuable is really worthless - the view spreads like wildfire, even if it's pretty implausible. Sam Atis has an article titled The Case Against Public Intellectuals. He starts it by noting a surprising fact: lots of his friends think education has no benefits. This isn't because they've done a thorough investigation of the literature - it's because they've read Bryan Caplan's book arguing for that thesis. Atis notes that there's a literature review finding that education has significant benefits, yet it's written by boring academics, so no one has read it. Everyone wants to read the contrarians who criticize education - no one wants to read the boring lit reviews that say what we believed about education all along is right. Sam is right, yet I think he understates the problem. There are various topics where arguing for one side of them is inherently interesting, yet arguing for the other side is boring. There are a lot of people who read Austrian economics blogs, yet no one reads (or writes) anti-Austrian economics blogs. That's because there are a lot of fans of Austrian economics - people who are willing to read blogs on the subject - but almost no one who is really invested in Austrian economics being wrong. So as a result, in general, the structural incentives of the blogosphere favor being a contrarian. Thus, you should expect the sense of the debate you get, unless you peruse the academic literature in depth surrounding some topic, to be wildly skewed towards contrarian views. And I think this is exactly what we observe. I've seen the contrarians be wrong over and over again - and this is what really made me lose faith in them. Whenever I looked more into a topic, whenever I got to the bottom of the full debate, it always seemed like the contrarian case fell apart.
It's easy for contrarians to portray their opponents as the kind of milquetoast bureaucrats who aren't very smart and follow the consensus just because it is the consensus. If Bryan Caplan has a disagreement with a random administrator, I trust that Bryan Caplan's probably right, because he's smarter and cares more about ideas. But what I've come to realize is that the mainstream view that's supported by most of the academics tends to be supported by some r...

The Nonlinear Library: LessWrong
LW - Losing Faith In Contrarianism by omnizoid

The Nonlinear Library: LessWrong

Play Episode Listen Later Apr 26, 2024 7:55


Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Losing Faith In Contrarianism, published by omnizoid on April 26, 2024 on LessWrong. Crosspost from my blog. If you spend a lot of time in the blogosphere, you'll find a great deal of people expressing contrarian views. If you hang out in the circles that I do, you'll probably have heard Yudkowsky say that dieting doesn't really work, Guzey say that sleep is overrated, Hanson argue that medicine doesn't improve health, various people argue for the lab leak, others argue for hereditarianism, Caplan argue that mental illness is mostly just aberrant preferences and education doesn't work, and various other people expressing contrarian views. Often, very smart people - like Robin Hanson - will write long posts defending these views, other people will have criticisms, and it will all be such a tangled mess that you don't really know what to think about them. For a while, I took a lot of these contrarian views pretty seriously. If I'd had to bet 6 months ago, I'd have bet on the lab leak, at maybe 2 to 1 odds. I'd have had significant credence in Hanson's view that healthcare doesn't improve health until pretty recently, when Scott released his post explaining why it is wrong. Over time, though, I've become much less sympathetic to these contrarian views. It's become increasingly obvious that the things that make them catch on are unrelated to their truth. People like being provocative and tearing down sacred cows - as a result, when a smart articulate person comes along defending some contrarian view - perhaps one claiming that something we think is valuable is really worthless - the view spreads like wildfire, even if it's pretty implausible. Sam Atis has an article titled The Case Against Public Intellectuals. He starts it by noting a surprising fact: lots of his friends think education has no benefits. This isn't because they've done a thorough investigation of the literature - it's because they've read Bryan Caplan's book arguing for that thesis. Atis notes that there's a literature review finding that education has significant benefits, yet it's written by boring academics, so no one has read it. Everyone wants to read the contrarians who criticize education - no one wants to read the boring lit reviews that say what we believed about education all along is right. Sam is right, yet I think he understates the problem. There are various topics where arguing for one side of them is inherently interesting, yet arguing for the other side is boring. There are a lot of people who read Austrian economics blogs, yet no one reads (or writes) anti-Austrian economics blogs. That's because there are a lot of fans of Austrian economics - people who are willing to read blogs on the subject - but almost no one who is really invested in Austrian economics being wrong. So as a result, in general, the structural incentives of the blogosphere favor being a contrarian. Thus, you should expect the sense of the debate you get, unless you peruse the academic literature in depth surrounding some topic, to be wildly skewed towards contrarian views. And I think this is exactly what we observe. I've seen the contrarians be wrong over and over again - and this is what really made me lose faith in them.
Whenever I looked more into a topic, whenever I got to the bottom of the full debate, it always seemed like the contrarian case fell apart. It's easy for contrarians to portray their opponents as the kind of milquetoast bureaucrats who aren't very smart and follow the consensus just because it is the consensus. If Bryan Caplan has a disagreement with a random administrator, I trust that Bryan Caplan's probably right, because he's smarter and cares more about ideas. But what I've come to realize is that the mainstream view that's supported by most of the academics tends to be supported by some r...

The Nonlinear Library
EA - If You're Going To Eat Animals, Eat Beef and Dairy by Omnizoid

The Nonlinear Library

Play Episode Listen Later Apr 23, 2024 3:52


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If You're Going To Eat Animals, Eat Beef and Dairy, published by Omnizoid on April 23, 2024 on The Effective Altruism Forum. Crosspost of my blog. You shouldn't eat animals in normal circumstances. That much is, in my view, quite thoroughly obvious. Animals undergo cruel, hellish conditions that we'd confidently describe as torture if they were inflicted on a human (or even a dog). No hamburger is worth that kind of cruelty. However, not all animals are the same. Contra Napoleon in Animal Farm, all animals are not equal. Cows are big. The average person eats 2400 chickens but only 11 cows in their life. That's mostly because chickens are so many times smaller than cows, so you can only get so many chicken sandwiches out of a single chicken. But how much worse is chicken than cow? Brian Tomasik devised a helpful suffering calculator chart. It has various columns - one for how sentient you think the animals are, compared to humans, one for how long the animal lives, etc. You can change the numbers around if you want. I changed the sentience numbers to accord with the results of the most detailed report on the subject (for the animals they didn't sample, I just compared similar animals), done by Rethink Priorities: When I did that, I got the following: Rather than, as the original chart did, setting cows = 1 for the sentience threshold, I set humans = 1 for it. So you should think in terms of the suffering caused as roughly equivalent to the suffering caused if you locked a severely mentally enfeebled person or baby in a factory farm and tormented them for that number of days. Dairy turns out not that bad compared to the rest - a kg of dairy is only equivalent to torturing a baby for about 70 minutes in terms of suffering caused. That means if you get a gallon of milk, that's only equivalent to confining and tormenting a baby for about 4 and a half hours. That's positively humane compared to the rest! Now I know people will object that human suffering is much worse than animal suffering. But this is totally unjustified. Making a human feel pain is generally worse because we feel pain more intensely, but in this case, we're analyzing how bad a unit of pain is. If the amount of suffering is the same, it's not clear what about animals is supposed to make their suffering so monumentally unimportant. Their feathers? Their lack of mental acuity? We controlled for that by having the comparison be a baby or a severely mentally disabled person (babies are dumb, wholly unable to do advanced mathematics). Ultimately, thinking animal pain doesn't matter much is just unjustified speciesism, wherein one takes an obviously intrinsically morally irrelevant feature like species to determine moral worth. Just like racism and sexism, speciesism is wholly indefensible - it places moral significance on a totally morally insignificant category. Even if you reject this, the chart should still inform your eating decisions. As long as you think animal suffering is bad, the chart is informative. Some kinds of animal products cause a lot more suffering than others - you should avoid the ones that cause more suffering. Dairy, for instance, causes over 800 times less suffering than chicken and over 1000 times less than eggs. Drinking a gallon of milk a day for a year is then about as bad as having a chicken sandwich once every four months.
Chicken is then really really bad - way worse than most other things. Dairy and beef mostly aren't a big deal in comparison. And you can play around the numbers if you disagree with them - whatever answer you come to should be informative. I remember seeing this chart was instrumental in my going vegan. I realized that each time I have a chicken sandwich, animals have to suffer in darkness, feces, filth, and misery for weeks on end. That's not worth a ...
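For readers who want to see how a chart like Tomasik's combines its inputs, here is a minimal sketch of the arithmetic described above: a sentience weight relative to humans, multiplied by the days of animal suffering behind each kilogram of product, scaled by how much of it you eat. Every number in the table below is an illustrative placeholder, not a value from Tomasik's chart or the Rethink Priorities report.

```python
# Minimal sketch of a per-product suffering comparison in the style described above.
# All figures below are illustrative placeholders, NOT values from Brian Tomasik's
# chart or the Rethink Priorities sentience report.

PRODUCTS = {
    # product: (sentience weight relative to humans, days of animal suffering per kg produced)
    "chicken": (0.3, 2.0),
    "eggs":    (0.3, 2.5),
    "beef":    (0.5, 0.05),
    "dairy":   (0.5, 0.002),
}

def human_equivalent_days(product: str, kg: float) -> float:
    """Suffering behind `kg` of a product, in human-equivalent days (humans = 1)."""
    sentience_weight, suffering_days_per_kg = PRODUCTS[product]
    return sentience_weight * suffering_days_per_kg * kg

if __name__ == "__main__":
    for name in PRODUCTS:
        print(f"1 kg of {name:7s} ~ {human_equivalent_days(name, 1.0):.4f} human-equivalent days")

    # Compare a year of drinking a gallon of milk (~3.8 kg) per day
    # with eating one ~0.2 kg chicken sandwich every four months.
    milk_year = human_equivalent_days("dairy", 3.8 * 365)
    sandwiches_year = human_equivalent_days("chicken", 0.2 * 3)
    print(f"Gallon of milk daily for a year: {milk_year:.2f} human-equivalent days")
    print(f"Three chicken sandwiches a year: {sandwiches_year:.2f} human-equivalent days")
```

Swapping in your own sentience weights and per-kilogram figures reproduces the kind of comparison the post describes; the point of the sketch is the structure of the calculation, not the particular numbers.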

The Writers’ Co-op
Crosspost! Conversations, Not Confrontations: Learning the Art of Negotiation with Wudan Yan

The Writers’ Co-op

Play Episode Listen Later Feb 6, 2024 45:46


This episode is a cross-post between The Writers' Co-op and Freelance Cake, a podcast for ambitious freelancers who want to get more results with less effort, hosted by Austin L. Church. Austin had Wudan on his show to talk about how she cultivated a mindset of 'always be negotiating,' and how she got to a place where negotiations felt comfortable and conversation-like, rather than potentially contentious.

Cognitive Dissidents
Trouble in the Red Sea, logistics snarls, China, and the U.S. election

Cognitive Dissidents

Play Episode Listen Later Jan 22, 2024 23:56


Crosspost with Shaun Haney's Frontlines from RealAgriculture!

Up for discussion in this episode:
Skirmish in the Red Sea — the U.S. is working to stop Iran-backed Houthi attacks in the shipping lane
Logistics are definitely impacted, but is it as big a deal as it could be? There are many players in the mix, and much to unpack on this one
Taiwan's election is done. Does it make relations with China better or worse, and why does it matter?
Speaking of elections, what about Argentina? China/Argentina trade
On to America! The U.S. election set for November is going to be a wild ride. It doesn't matter who wins; either way, it's going to have a huge impact on geopolitics
The outcome of the U.S. election will be a major factor in shaping geopolitical instability and volatility in the coming year, Shapiro says

CI Site: cognitive.investments
Jacob Site: jacobshapiro.com
Jacob Twitter: x.com/JacobShap
Subscribe to the Newsletter: bit.ly/weekly-sitrep

Cognitive Investments is an investment advisory firm, founded in 2019, that provides clients with a nuanced array of financial planning, investment advisory, and wealth management services. We aim to grow both our clients' material wealth (i.e. their existing financial assets) and their human wealth (i.e. their ability to make good strategic decisions for their business, family, and career).

Disclaimer: Cognitive Investments LLC (“Cognitive Investments”) is a registered investment advisor. Advisory services are only offered to clients or prospective clients where Cognitive Investments and its representatives are properly licensed or exempt from licensure. The information provided is for educational and informational purposes only and does not constitute investment advice, and it should not be relied on as such. It should not be considered a solicitation to buy or an offer to sell a security. It does not take into account any investor's particular investment objectives, strategies, tax status or investment horizon. You should consult your attorney or tax advisor.

This podcast uses the following third-party services for analysis:
Chartable - https://chartable.com/privacy
Podtrac - https://analytics.podtrac.com/privacy-policy-gdrp

Storytime
r/EntitledParents WHY I HATE MY MOTHER! - Reddit Stories

Storytime

Play Episode Listen Later Dec 23, 2023 30:19


Reddit rSlash Storytime r entitledparents Karen tells a teen to stop playing with his 6 YEAR OLD TWIN BROTHERS **My mother got rude with a friend of mine and wonders why we won't help her **I loathe my parents to the point where I want to hurt them **[Crosspost from /AmITheJerk] Entitled mother demands her toddler use an extra seat that OP paid for **For around 20 minutes, a toddler was hitting the tip of their fork against their plate at a sit down restaurant **my dad made me spend my own money on groceries **My step dad is so anti internet he has convinced my mom that video games are like drugs and has completely isolated me from my friends **Parents living in my home for the last 5 years is over. **The moment I realized I wasn't a little kid anymore Hosted on Acast. See acast.com/privacy for more information.

The Movie Ladder Podcast
The Night Before (Pretty, Pretty, Pretty Good Podcast CROSSPOST)

The Movie Ladder Podcast

Play Episode Listen Later Dec 22, 2023 121:36


As a special Jewish Christmas present, we're giving you a bonus crosspost podcast of Zach's January 2023 appearance on Pretty, Pretty, Pretty Good/Pretty Good Friends to talk about the Christmas classic THE NIGHT BEFORE. You can stream THE NIGHT BEFORE on Tubi or for streaming rental and you should watch it. Enjoy the bonus episode and we'll see you next week! --- Send in a voice message: https://podcasters.spotify.com/pod/show/the-movie-ladder-podcast/message

Shonen Flop
(Crosspost) Anime Out of Context - Episode 257 - Chainsaw Man

Shonen Flop

Play Episode Listen Later Nov 27, 2023 92:07


David's getting married so instead of a regular episode we're posting our appearance on Anime out of Context where we discuss the Chainsaw Man anime!  Episode description: This week, Shaun & Remington are joined by the lovely David & Jordan from Shonen Flop to discuss the first 5 episodes of Chainsaw Man! Be sure to check out ShonenFlop.com for more of David and Jordan as well as their ever-expanding list of all-star guests! If you'd like to give us feedback, ask a question, or correct a mistake, send an email to AnimeOutOfContext@gmail.com or tweet at us @AnimeConPod.  Visit our Patreon at patreon.com/AnimeoutofContext if you would like to contribute to the show and get bonus content ranging from clips from our pre-episode banter, bonus episodes (including the 12 days of April Fools), our prototype Episode 0, to even getting shoutouts in the show! Intro and Outro are trimmed from "Remiga Impulse" by Jens Kiilstofte, licensed by MachinimaSound to Anime Out of Context under CC BY-NC-ND 4.0 which the licensor has modified for the licensee to allow reproduction and sharing of the Adapted Material for Commercial purposes

Lost Terminal
Announcements & Modem Prometheus "Tulpa"

Lost Terminal

Play Episode Listen Later Oct 31, 2023 28:38


PATRONS OF ALL LEVELS CAN LISTEN TO 14.1 NOW: https://www.patreon.com/posts/92009349

A Halloween treat: A quadruple bill of 41s, TPC, Dragonmeet, and MP!

1. 41 SOUTH BOOK
Just in case you missed it, 41 South the PDF is available here (and it's not Amazon!): https://www.blurb.co.uk/b/11650975-41-south
And if you'd like a physical copy, then here are a couple of Amazon links:
https://amzn.eu/d/6rCjAHJ
https://a.co/d/1qlP43n
Orders seem to be being fulfilled fairly quickly now, thanks to those of you who've already made a purchase -- thank you so much! And finally, if you could write a review on Amazon, Robert & I would be extremely grateful, this really helps us out.

2. THE PHOSPHENE CATALOGUE
Pilot coming soon, I'll cross-post it into the LT feed, so you don't miss it! The website's under construction, but take a sneak peek: https://phosphenecatalogue.com/

3. DRAGONMEET
If you're in London on the 2nd of December, come along and say hi! Tix @ https://www.dragonmeet.co.uk

4. MODEM PROMETHEUS
The rest of this is the full, spooky latest episode called "Tulpa". Episode transcript: https://www.patreon.com/posts/episode-2-8-91599724
If you like what you hear, there's nearly 2 full seasons waiting for you, available wherever you get your podcasts, or https://www.spreaker.com/show/modem-prometheus

Cognitive Dissidents
Fraying World Order - Intrigue Outloud Crosspost

Cognitive Dissidents

Play Episode Listen Later Oct 14, 2023 47:10


On today's Intrigue Outloud, Ethan is joined by Intrigue co-founder John Fowler and Jacob Shapiro, Partner and Director of Geopolitical Analysis at Cognitive Investments, to break down the latest from the conflict in Israel and Gaza, and to consider what the fighting tells us about the state of the world order. Thanks to our sponsor, Millennium Space Systems.

CI Site: cognitive.investments
Jacob Site: jacobshapiro.com
Jacob Twitter: x.com/JacobShap
Subscribe to the Newsletter: bit.ly/weekly-sitrep

Cognitive Investments is an investment advisory firm, founded in 2019, that provides clients with a nuanced array of financial planning, investment advisory, and wealth management services. We aim to grow both our clients' material wealth (i.e. their existing financial assets) and their human wealth (i.e. their ability to make good strategic decisions for their business, family, and career).

Disclaimer: Cognitive Investments LLC (“Cognitive Investments”) is a registered investment advisor. Advisory services are only offered to clients or prospective clients where Cognitive Investments and its representatives are properly licensed or exempt from licensure. The information provided is for educational and informational purposes only and does not constitute investment advice, and it should not be relied on as such. It should not be considered a solicitation to buy or an offer to sell a security. It does not take into account any investor's particular investment objectives, strategies, tax status or investment horizon. You should consult your attorney or tax advisor.

This podcast uses the following third-party services for analysis:
Chartable - https://chartable.com/privacy
Podtrac - https://analytics.podtrac.com/privacy-policy-gdrp

Effective Altruism Forum Podcast
[Linkpost] “Pause For Thought: The AI Pause Debate” by Scott Alexander

Effective Altruism Forum Podcast

Play Episode Listen Later Oct 10, 2023 26:49


Crosspost from Astral Codex Ten. I. Last month, Ben West of the Center for Effective Altruism hosted a debate among long-termists, forecasters, and x-risk activists about pausing AI. Everyone involved thought AI was dangerous and might even destroy the world, so you might expect a pause - maybe even a full stop - would be a no-brainer. It wasn't. Participants couldn't agree on basics of what they meant by “pause”, whether it was possible, or whether it would make things better or worse. There was at least some agreement on what a successful pause would have to entail. Participating governments would ban “frontier AI models”, for example models using more training compute than GPT-4. Smaller models, or novel uses of new models would be fine, or else face an FDA-like regulatory agency. States would enforce the ban against domestic companies by monitoring high-performance microchips; they would enforce it against non-participating governments by banning [...] The original text contained 4 footnotes which were omitted from this narration. --- First published: October 10th, 2023 Source: https://forum.effectivealtruism.org/posts/7WfMYzLfcTyDtD6Gn/pause-for-thought-the-ai-pause-debate Linkpost URL: https://www.astralcodexten.com/p/pause-for-thought-the-ai-pause-debate --- Narrated by TYPE III AUDIO.

Cognitive Dissidents
Agriculture x Geopolitics (RealAgriculture Crosspost)

Cognitive Dissidents

Play Episode Listen Later Oct 2, 2023 37:09


Jacob Shapiro and Shaun Haney unpack the India-Canada diplomatic nightmare of the last few weeks and how the Ukraine/Russia war might end.

Link to RealAg Post: https://www.realagriculture.com/2023/09/frontlines-national-interests-vs-ideology-indian-separatism-and-western-support-for-ukraine/

CI LinkedIn: https://www.linkedin.com/company/cognitive-investments/
CI Website: https://cognitive.investments
CI Twitter: https://twitter.com/CognitiveInvest
Jacob LinkedIn: https://www.linkedin.com/in/jacob-l-s-a9337416/
Jacob Twitter: https://twitter.com/JacobShap
Subscribe to the Newsletter: https://investments.us17.list-manage.com/subscribe?u=156086d89c91a42d264546df7&id=4e31ca1340

Cognitive Investments is an investment advisory firm, founded in 2019, that provides clients with a nuanced array of financial planning, investment advisory, and wealth management services. We aim to grow both our clients' material wealth (i.e. their existing financial assets) and their human wealth (i.e. their ability to make good strategic decisions for their business, family, and career).

Disclaimer: Nothing discussed on Cognitive Dissidents should be considered as investment advice. Please always do your own research & speak to a financial advisor before putting your money into the markets.

This podcast uses the following third-party services for analysis:
Chartable - https://chartable.com/privacy
Podtrac - https://analytics.podtrac.com/privacy-policy-gdrp

The Nonlinear Library
EA - The Bulwark's Article On Effective Altruism Is a Short Circuit by Omnizoid

The Nonlinear Library

Play Episode Listen Later Sep 27, 2023 14:20


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Bulwark's Article On Effective Altruism Is a Short Circuit, published by Omnizoid on September 27, 2023 on The Effective Altruism Forum. Crosspost of this on my blog. I really like the Bulwark. Their articles are consistently funny, well-written, and sensible. But recently, Mary Townsend wrote a critique of effective altruism titled "Effective Altruism Is a Short Circuit" that seems deeply confused. In fact, I will go further and make a stronger claim - there is not a single argument made in the entire article that should cause anyone to be less sympathetic to effective altruism at all! Every claim in the article is either true but irrelevant to the content of effective altruism or false. The article is crafted in many ways to mislead, confuse and induce negative affect in the reader but is light on anything of substance. For instance, the article begins with a foreboding picture of the notorious EA fraudster Sam Bankman-Fried. This is not an explicit argument given, of course - it's just a picture. If it were forced to be an argument, it would not succeed - even if Bernie Madoff gave a lot of money to the Red Cross and has some role in planning operations, that would do nothing to discredit the Red Cross; the same principle is true of EA. But when one is writing a smear piece, they don't need to include real objections - they can just include things that induce disdain in the reader that they come to associate with the object of the author's criticism. Such is reminiscent of the flashing red letters that are ubiquitous in attack ads - good if one's aim is propaganda, bad if one's aim is truth. The article spends the first few paragraphs on mostly unremarkable philosophical musings about how we often have an urge to do good and we can choose what we do, filled with sophisticated-sounding references to philosophers and literature. Such musings help build the author's ethos as a Very Serious Person, but do little to provide an argument. However, after a few paragraphs of this, the author gets to the first real criticism: That one could become good through monetary transactions should raise our post-Reformation suspicions, obviously. As a simple response to the stipulation of a dreadful but equally simple freedom, it seems almost designed to hit us at the weakest spots of our human frailty, with disconcerting effects. Effective altruism doesn't claim, like those who endorsed indulgences, that one can become good through donating. It claims that one can do good through donating and that one should do good. The second half of that claim is a trivially obvious moral claim - we should help people more rather than less - and the first half of the claim is backed by quite overwhelming empirical evidence. While one can dispute the details somewhat, the claim that we can save the lives of faraway people for a few thousand dollars is incontrovertible given the weight of the available evidence - there's a reason that critics of EA never have specific criticisms of the empirical claims made by effective altruists.
Once one acknowledges that those who give to effective charities can save hundreds of lives over the course of their lives by fairly modest donations, a claim that even critics of such giving generally do not dispute, the claim that one should donate significant amounts to charities in order to save the lives of lots of people who would otherwise have died of horrifying diseases ceases to be something that "raises our post-reformation suspicions." One imagines the following dialogue (between Townsend and a starving child): Child: Please, could I have five dollars. This would allow me to afford food today so I wouldn't be hungry. Townsend: Sorry, I'd love to help, but that one could become good through monetary transactions should raise our post-Reformation suspici...

The Nonlinear Library
LW - Eugenics Performed By A Blind, Idiot God by omnizoid

The Nonlinear Library

Play Episode Listen Later Sep 18, 2023 3:42


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Eugenics Performed By A Blind, Idiot God, published by omnizoid on September 18, 2023 on LessWrong. (Azathoth, Lovecraft's famous blind, amoral, idiot God.) Crosspost of this. I'm hugely in favor of gene editing and other actions that would improve the welfare of future people. If we could perform genetic engineering that made future people smarter, happier, and less likely to get diseases, I'd be in favor of it. This assumption is controversial. Many people think there's something immoral about changing the genes of humans, even in ways that are expected to improve their quality of life. They think that such an idea sounds too much like the clearly objectionable coercive eugenics of the Nazis, for instance. But you know what would be almost as bad as eugenics performed by Nazis - eugenics performed by a totally random, amoral selector. This eugenicist wouldn't have cruel ideas about Aryan superiority, for instance - instead, they have a bizarre fetishism of reproduction. This selector performs eugenics so that only those who reproduce a lot - and also help out their kin - are likely to pass on their genes. Such a selector is wholly unconcerned with human welfare. It makes it so that humans can't empathize with those far away, because warring with native tribes is effective. It makes it so that men are naturally more aggressive than women, committing 96% of homicides, all because in the evolutionary past it was beneficial - it enabled more efficient fighting, for instance. In fact, this selector has selected for individuals who are likely to engage in "rape . . . infanticide, abuse of low-status individuals, and murdering all those people over there and taking their stuff." It selects for individuals who reproduce frequently, rather than those who are good, moral, or even happy. In fact, in some other species, things are even worse. Some species give birth to hundreds of millions of eggs, many of whom contain sentient beings almost all of whom die a horrible painful death at a young age. This selector makes it so that male ducks have corkscrew penises so that they can rape female ducks more efficiently. This selector has been operating for billions of years. Their amorality results in them designing both all sorts of horrifying, rapidly multiplying bacteria and viruses that kill lots of people and animals alike, and various defense mechanisms. But after millions of years, it offers for you to take over their job. Rather than selecting for prolificness alone, you can affect which beings exist in the future with moral goals in mind! You can make it so that future beings are likely to be happy, kind, and just. Isn't this an improvement? But this is the world that we live in. Selection pressures exist as an inevitable fact of life. Evolution has shaped our behaviors. Our choice is not between selected behaviors and unselected behaviors. It is between behaviors selected by parents who have their children's best interests in mind, who want their children to be kind, happy, and healthy, and selection at the hands of the blind, idiot, Darwinian god, who has been practicing social Darwinism for millions of years, where only those who pass on their genes have their traits reproduced. Of course, this doesn't mean that we should practice the kind of harmful, coercive eugenics practiced by the Nazis.
It doesn't mean we should prevent anyone from reproducing. But it does mean we should empower parents with the option of gene editing to improve their children's lives, rather than banning it. It means we should see making future people happier and more moral as a worthwhile goal. The amoral god that selects only for having many offspring has turned over the reins to us. We can leave its ghastly designs in place, or instead change them to improve the lives of the future. I think th...

The Nonlinear Library: LessWrong
LW - Eugenics Performed By A Blind, Idiot God by omnizoid

The Nonlinear Library: LessWrong

Play Episode Listen Later Sep 18, 2023 3:42


Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Eugenics Performed By A Blind, Idiot God, published by omnizoid on September 18, 2023 on LessWrong. (Azathoth, Lovecraft's famous blind, amoral, idiot God.) Crosspost of this. I'm hugely in favor of gene editing and other actions that would improve the welfare of future people. If we could perform genetic engineering that made future people smarter, happier, and less likely to get diseases, I'd be in favor of it. This assumption is controversial. Many people think there's something immoral about changing the genes of humans, even in ways that are expected to improve their quality of life. They think that such an idea sounds too much like the clearly objectionable coercive eugenics of the Nazis, for instance. But you know what would be almost as bad as eugenics performed by Nazis - eugenics performed by a totally random, amoral selector. This eugenicist wouldn't have cruel ideas about Aryan superiority, for instance - instead, they have a bizarre fetishism of reproduction. This selector performs eugenics so that only those who reproduce a lot - and also help out their kin - are likely to pass on their genes. Such a selector is wholly unconcerned with human welfare. It makes it so that humans can't empathize with those far away, because warring with native tribes is effective. It makes it so that men are naturally more aggressive than women, committing 96% of homicides, all because in the evolutionary past it was beneficial - it enabled more efficient fighting, for instance. In fact, this selector has selected for individuals who are likely to engage in "rape . . . infanticide, abuse of low-status individuals, and murdering all those people over there and taking their stuff." It selects for individuals who reproduce frequently, rather than those who are good, moral, or even happy. In fact, in some other species, things are even worse. Some species give birth to hundreds of millions of eggs, many of whom contain sentient beings almost all of whom die a horrible painful death at a young age. This selector makes it so that male ducks have corkscrew penises so that they can rape female ducks more efficiently. This selector has been operating for billions of years. Their amorality results in them designing both all sorts of horrifying, rapidly multiplying bacteria and viruses that kill lots of people and animals alike, and various defense mechanisms. But after millions of years, it offers for you to take over their job. Rather than selecting for prolificness alone, you can affect which beings exist in the future with moral goals in mind! You can make it so that future beings are likely to be happy, kind, and just. Isn't this an improvement? But this is the world that we live in. Selection pressures exist as an inevitable fact of life. Evolution has shaped our behaviors. Our choice is not between selected behaviors and unselected behaviors. It is between behaviors selected by parents who have their children's best interests in mind, who want their children to be kind, happy, and healthy, and selection at the hands of the blind, idiot, Darwinian god, who has been practicing social Darwinism for millions of years, where only those who pass on their genes have their traits reproduced. Of course, this doesn't mean that we should practice the kind of harmful, coercive eugenics practiced by the Nazis.
It doesn't mean we should prevent anyone from reproducing. But it does mean we should empower parents with the option of gene editing to improve their children's lives, rather than banning it. It means we should see making future people happier and more moral as a worthwhile goal. The amoral god that selects only for having many offspring has turned over the reins to us. We can leave its ghastly designs in place, or instead change them to improve the lives of the future. I think th...

The Nonlinear Library
EA - Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong by Omnizoid

The Nonlinear Library

Play Episode Listen Later Aug 27, 2023 58:48


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong, published by Omnizoid on August 27, 2023 on The Effective Altruism Forum. Introduction "After many years, I came to the conclusion that everything he says is false. . . . "He will lie just for the fun of it. Every one of his arguments was tinged and coded with falseness and pretense. It was like playing chess with extra pieces. It was all fake." Paul Postal (talking about Chomsky) (note, this is not exactly how I feel about Yudkowsky, I don't think he's knowingly dishonest, but I just thought it was a good quote and partially represents my attitude towards Yudkowsky). Crosspost of this on my blog. In the days of my youth, about two years ago, I was a big fan of Eliezer Yudkowsky. I read his many, many writings religiously, and thought that he was right about most things. In my final year of high school debate, I read a case that relied crucially on the many worlds interpretation of quantum physics - and that was largely a consequence of reading through Eliezer's quantum physics sequence. In fact, Eliezer's memorable phrasing that the many worlds interpretation "wins outright given the current state of evidence," was responsible for the title of my 44-part series arguing for utilitarianism titled "Utilitarianism Wins Outright." If you read my early articles, you can find my occasional blathering about reductionism and other features that make it clear that my worldview was at least somewhat influenced by Eliezer. But as I grew older and learned more, I realized it was all bullshit. Eliezer sounds good whenever he's talking about a topic that I don't know anything about. I know nothing about quantum physics, and he sounds persuasive when talking about quantum physics. But every single time he talks about a topic that I know anything about, with perhaps one or two exceptions, what he says is total nonsense, at least, when it's not just banal self-help advice. It is not just that I always end up disagreeing with him, it is that he says with almost total confidence outrageous falsehood after outrageous falsehood, making it completely clear he has no idea what he is talking about. And this happens almost every single time. It seems that, with few exceptions, whenever I know anything about a topic that he talks about, it becomes clear that his view is a house of cards, built entirely on falsehoods and misrepresentations. Why am I writing a hit piece on Yudkowsky? I certainly don't hate him. In fact, I'd guess that I agree with him much more than almost all people on earth. Most people believe lots of outrageous falsehoods. And I think that he has probably done more good than harm for the world by sounding the alarm about AI, which is a genuine risk. And I quite enjoy his scrappy, willing-to-be-contrarian personality. So why him? Part of this is caused by personal irritation. Each time I hear some rationalist blurt out "consciousness is just what an algorithm feels like from the inside," I lose a year of my life and my blood pressure doubles (some have hypothesized that the explanation for the year of lost life involves the doubling of my blood pressure). And I spend much more time listening to Yudkowsky's followers spout nonsense than most other people. But a lot of it is that Yudkowsky has the ear of many influential people.
He is one of the most influential AI ethicists around. Many people, my younger self included, have had their formative years hugely shaped by Yudkowsky's views - on tons of topics. As Eliezer says: In spite of how large my mistakes were, those two years of blog posting appeared to help a surprising number of people a surprising amount. Quadratic Rationality expresses a common sentiment, that the sequences, written by Eliezer, have significantly shaped the world view of him and others. Elie...

Cognitive Dissidents
Canadian Beef (RealAg Crosspost)

Cognitive Dissidents

Play Episode Listen Later Aug 14, 2023 27:32


Jacob is featured on RealAg Radio with Shaun Haney! They'll dig into China's economic and social standing and what Russia might do next. You'll also hear from Joyce Parslow, with Canada Beef, on new culture, lifestyle, health and wellness information programs launched for Canadian beef, AND a spotlight interview with Tom Barrie of Koch Ag Services on comparing nitrogen stabilizers.

Check out RealAg Radio here: https://www.realagriculture.com/category/realag-radio/

CI LinkedIn: https://www.linkedin.com/company/cognitive-investments/
CI Website: https://cognitive.investments
CI Twitter: https://twitter.com/CognitiveInvest
Jacob LinkedIn: https://www.linkedin.com/in/jacob-l-s-a9337416/
Jacob Twitter: https://twitter.com/JacobShap
Subscribe to the Newsletter: https://investments.us17.list-manage.com/subscribe?u=156086d89c91a42d264546df7&id=4e31ca1340

Cognitive Investments is an investment advisory firm, founded in 2019, that provides clients with a nuanced array of financial planning, investment advisory, and wealth management services. We aim to grow both our clients' material wealth (i.e. their existing financial assets) and their human wealth (i.e. their ability to make good strategic decisions for their business, family, and career).

Disclaimer: Nothing discussed on Cognitive Dissidents should be considered as investment advice. Please always do your own research & speak to a financial advisor before putting your money into the markets.

This podcast uses the following third-party services for analysis:
Chartable - https://chartable.com/privacy
Podtrac - https://analytics.podtrac.com/privacy-policy-gdrp

History of Japan
Crosspost - The Five Men of Naniwa

History of Japan

Play Episode Listen Later Aug 11, 2023 62:21


In lieu of a traditional episode, enjoy this one from the archives of my other podcast Criminal Records! 

Audrey Helps Actors Podcast
Tipsy Casting episode crosspost - Jessica Sherman & Jenn Presser

Audrey Helps Actors Podcast

Play Episode Listen Later Aug 7, 2023 73:03


This week Audrey Moore gets drunk on the Tipsy Casting Podcast with Jessica Sherman and Jenn Presser and they all talk openly about the current SAG-AFTRA strike. They touch on the difference between being a hobbyist vs. being a professional in the industry, transparency within the industry, and an explanation of what's at stake for the actors and what strategy SAG-AFTRA is currently employing.

The Nonlinear Library
EA - Alignment Grantmaking is Funding-Limited Right Now [crosspost] by johnswentworth

The Nonlinear Library

Play Episode Listen Later Aug 3, 2023 2:27


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Alignment Grantmaking is Funding-Limited Right Now [crosspost], published by johnswentworth on August 3, 2023 on The Effective Altruism Forum. For the past few years, I've generally mostly heard from alignment grantmakers that they're bottlenecked by projects/people they want to fund, not by amount of money. Grantmakers generally had no trouble funding the projects/people they found object-level promising, with money left over. In that environment, figuring out how to turn marginal dollars into new promising researchers/projects - e.g. by finding useful recruitment channels or designing useful training programs - was a major problem. Within the past month or two, that situation has reversed. My understanding is that alignment grantmaking is now mostly funding-bottlenecked. This is mostly based on word-of-mouth, but for instance, I heard that the recent Lightspeed Grants round received far more applications that passed the bar for basic promising-ness than they could fund. I've also heard that the Long-Term Future Fund (which funded my current grant) now has insufficient money for all the grants they'd like to fund. I don't know whether this is a temporary phenomenon, or longer-term. Alignment research has gone mainstream, so we should expect both more researchers interested and more funders interested. It may be that the researchers pivot a bit faster, but funders will catch up later. Or, it may be that the funding bottleneck becomes the new normal. Regardless, it seems like grantmaking is at least funding-bottlenecked right now. Some takeaways: If you have a big pile of money and would like to help, but haven't been donating much to alignment because the field wasn't money constrained, now is your time! If this situation is the new normal, then earning-to-give for alignment may look like a more useful option again. That said, at this point committing to an earning-to-give path would be a bet on this situation being the new normal. Grants for upskilling, training junior people, and recruitment make a lot less sense right now from grantmakers' perspective. For those applying for grants, asking for less money might make you more likely to be funded. (Historically, grantmakers consistently tell me that most people ask for less money than they should; I don't know whether that will change going forward, but now is an unusually probable time for it to change.) Note that I am not a grantmaker, I'm just passing on what I hear from grantmakers in casual conversation. If anyone with more knowledge wants to chime in, I'd appreciate it. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library
EA - [Crosspost] An AI Pause Is Humanity's Best Bet For Preventing Extinction (TIME) by Otto

The Nonlinear Library

Play Episode Listen Later Jul 25, 2023 11:28


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Crosspost] An AI Pause Is Humanity's Best Bet For Preventing Extinction (TIME), published by Otto on July 25, 2023 on The Effective Altruism Forum. Otto Barten is director of the Existential Risk Observatory, a nonprofit aiming to reduce existential risk by informing the public debate. Joep Meindertsma is founder of PauseAI, a movement campaigning for an AI Pause.The existential risks posed by artificial intelligence (AI) are now widely recognized. After hundreds of industry and science leaders warned that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the U.N. Secretary-General recently echoed their concern. So did the prime minister of the U.K., who is also investing 100 million pounds into AI safety research that is mostly meant to prevent existential risk. Other leaders are likely to follow in recognizing AI's ultimate threat. In the scientific field of existential risk, which studies the most likely causes of human extinction, AI is consistently ranked at the top of the list. In The Precipice, a book by Oxford existential risk researcher Toby Ord that aims to quantify human extinction risks, the likeliness of AI leading to human extinction exceeds that of climate change, pandemics, asteroid strikes, supervolcanoes, and nuclear war combined. One would expect that even for severe global problems, the risk that they lead to full human extinction is relatively small, and this is indeed true for most of the above risks. AI, however, may cause human extinction if only a few conditions are met. Among them is human-level AI, defined as an AI that can perform a broad range of cognitive tasks at least as well as we can. Studies outlining these ideas were previously known, but new AI breakthroughs have underlined their urgency: AI may be getting close to human level already. Recursive self-improvement is one of the reasons why existential-risk academics think human-level AI is so dangerous. Because human-level AI could do almost all tasks at our level, and since doing AI research is one of those tasks, advanced AI should therefore be able to improve the state of AI. Constantly improving AI would create a positive feedback loop with no scientifically established limits: an intelligence explosion. The endpoint of this intelligence explosion could be a superintelligence: a godlike AI that outsmarts us the way humans often outsmart insects. We would be no match for it. A godlike, superintelligent AI A superintelligent AI could therefore likely execute any goal it is given. Such a goal would be initially introduced by humans, but might come from a malicious actor, or not have been thought through carefully, or might get corrupted during training or deployment. If the resulting goal conflicts with what is in the best interest of humanity, a superintelligence would aim to execute it regardless. To do so, it could first hack large parts of the internet and then use any hardware connected to it. Or it could use its intelligence to construct narratives that are extremely convincing to us. Combined with hacked access to our social media timelines, it could create a fake reality on a massive scale. 
As Yuval Harari recently put it: "If we are not careful, we might be trapped behind a curtain of illusions, which we could not tear away - or even realise is there." As a third option, after either legally making money or hacking our financial system, a superintelligence could simply pay us to perform any actions it needs from us. And these are just some of the strategies a superintelligent AI could use in order to achieve its goals. There are likely many more. Like playing chess against grandmaster Magnus Carlsen, we cannot predict the moves he will play, but we can predict the outcome: we los...

Titanic Talkline
Damsels Who Discuss crosspost: The Damsels Discuss Pinocchio

Titanic Talkline

Play Episode Listen Later Jul 12, 2023 74:56


This is a special bonus episode! I started a new podcast, so here is the 2nd episode where I discuss Pinocchio with Gally! This episode the Damsels shift their attention to the classic film about a wooden puppet come to life, Pinocchio! In their discussion of the 1940 classic, Alexia wants to know Gepetto's prescription, and Gally asks about the lack of gun control in toy shops. They also discuss the racism, who is walking a gaggle of swans around the village, more racism, mysterious ships in the night, and why there is a fox and cat wandering around without anyone saying anything!
Be sure to follow Damsels Who Discuss all over the internet!
@DamselsWhoDisco on Twitter
Damsels Who Discuss on Facebook
DamselsWhoDiscuss on IG
Be sure to like and subscribe to the show on your favorite podcasting platform!
@TitanicTalkine on Twitter
TitanicTalkline on Facebook
TitanicTalkline on IG
Hosted on Acast. See acast.com/privacy for more information.

Cognitive Dissidents
The Russian Cascade - RealAgriculture Crosspost

Cognitive Dissidents

Play Episode Listen Later Jun 16, 2023 38:25


Jacob sits down with RealAgriculture's Shaun Haney to discuss the geopolitical ramifications of Russia's invasion of Ukraine. They walk through the Black Sea Grain Initiative, food prices, Ukraine's escalating counteroffensive, and China's lowkey efforts to arm Russia. Shaun and Jacob will be chatting once a month about the intersections of geopolitics and agriculture, with crossposts across all of our channels!
Check out RealAg here - https://www.realagriculture.com/
--
CI LinkedIn: https://www.linkedin.com/company/cognitive-investments/
CI Website: https://cognitive.investments
CI Twitter: https://twitter.com/CognitiveInvest
Jacob LinkedIn: https://www.linkedin.com/in/jacob-l-s-a9337416/
Jacob Twitter: https://twitter.com/JacobShap
Subscribe to the Newsletter: https://investments.us17.list-manage.com/subscribe?u=156086d89c91a42d264546df7&id=4e31ca1340
--
Cognitive Investments is an investment advisory firm, founded in 2019 that provides clients with a nuanced array of financial planning, investment advisory and wealth management services. We aim to grow both our clients' material wealth (i.e. their existing financial assets) and their human wealth (i.e. their ability to make good strategic decisions for their business, family, and career).
--
Disclaimer: Nothing discussed on Cognitive Dissidents should be considered as investment advice. Please always do your own research & speak to a financial advisor before putting your money into the markets.
This podcast uses the following third-party services for analysis:
Chartable - https://chartable.com/privacy
Podtrac - https://analytics.podtrac.com/privacy-policy-gdrp

The Nonlinear Library
EA - The Common Sense View on Meat Implies Some Pretty Radical Conclusions by Omnizoid

The Nonlinear Library

Play Episode Listen Later Jun 14, 2023 6:53


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Common Sense View on Meat Implies Some Pretty Radical Conclusions, published by Omnizoid on June 14, 2023 on The Effective Altruism Forum. Crosspost of this from my Substack. Somewhere, right now, a cat is probably being harmed. Some psychopathic child is stomping on it, or beating it, or burying it alive, or dousing it in gasoline before he sets it on fire. This is, sadly, not uncommon. And almost everyone agrees that it's bad. Almost everyone agrees that, though animals don't have the same rights as people, we should not hurt them for trivial reasons. Even if this child and a few friends are having a good time hurting a cat, this is still immoral. And it's wrong not just because of what it does to the character of the boys—it's partly wrong because of what is being done to the cat. Torturing a cat is worse than torturing a robot that one believes to be a cat, even though they'd have the same effects on one's character because one causes actual torment to an animal and the other does not. The common sense view around animals seems to be “it's okay to eat them, but we should try our hardest not to mistreat them.” The philosophical merits of such a position can be debated, but this seems like something that almost everyone agrees with. But if this is true, then common sense condemns our current ghastly mistreatment of animals. People act like the vegan position that our current meat eating is seriously wrong is a radical position—a position that requires believing extreme views. But it's not—it's about as moderate as you can get. It's not radical to be opposed to eating eggs from chickens that were stuffed in a cage, covered in the falling, acidic feces of those above them, that burns their flesh and enters their nose, making it impossible to breathe and making their eyes constantly burn. It's not radical to not want to buy eggs from chickens stuffed in a tiny wire cage that causes them to develop foot conditions, constantly rubbing against sharp metal, too small for them to ever be able to turn around or lie down comfortably. It's not radical not to want to buy eggs from suppliers when every second of every day, there are at least 6 million hens being systematically starved, because there's a way to trick the bodies of the hens into thinking it's egg-laying season by starving them, resulting in more eggs being laid. When this is the way that 5-10% of hens die, it's not radical not to want to pay to exacerbate their torment. When the egg industry grinds up billions of bouncy baby male chicks, just because they can't lay eggs, it isn't radical to say that we shouldn't be paying them to grind up more. When the conditions are so bad that the hens go crazy and throw themselves against the sides of the cages, every natural behavior thwarted, everything that might bring them joy snuffed out in the dark, filthy, disgusting juggernauts of death, torment, and despair, it's not radical to not want to fund that. It's not radical to think that it's wrong to buy chickens when they have been artificially engineered to be in constant pain—their entire bodies twisted and warped into maximally efficient machines for generating flesh. When thousands of chickens are stuffed into crates in transport every hour in extreme weather, killing 5-10% of them, it's not wrong to say that we won't pay for that until they stop their systematic abuse. 
Chickens live in windowless sheds, constantly sleep-deprived from artificial lighting. When animals have their beaks, tails, and testicles cut off with a sharp knife, with no anesthetic, when they're given third-degree burns because it makes it easier to identify them, it isn't wrong to refuse to pay for that until the people systematically torturing them can get their act together and stop the torture. And yes, it is torture. This is not hyperbole...

The Nonlinear Library
EA - Here's a comprehensive fact sheet of almost all the ways animals are mistreated in factory farms by Omnizoid

The Nonlinear Library

Play Episode Listen Later May 4, 2023 44:18


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Here's a comprehensive fact sheet of almost all the ways animals are mistreated in factory farms, published by Omnizoid on May 4, 2023 on The Effective Altruism Forum. Crosspost of this on my blog.
1 Introduction
"See, there's the difference between us. I don't care about animals at all. I'm well aware of the cramped, squalid, and altogether unpleasant conditions suffered by livestock animals on many factory farms, and I simply could not care less. I see animals as a natural resource to be exploited. I don't care about them any more than I care about the trees that were cut down to make my house." (Random person on the bizarre anti-vegan subreddit)
I've previously argued against factory farming at some length, arguing that it is the worst thing ever. Here I will just lay out the facts about factory farming. I will describe what happens to the 80 or so billion beings we factory farm every year, who scream in agony and terror in the great juggernauts of despair, whose cries we ignore. They scream because of us—because of our apathy, because of our demand for their flesh—and it's about time that people learned exactly what is going on. Here I describe the horrors of factory farms, though if one is convinced that factory farms are evil, they should stop paying for their products, since paying for them demonstrably causes more animals to be tormented in concentration camp-esque conditions. If factory farms are as cruel as I suggest, then the obligation not to pay for them is a point of elementary morality. Anyone who is not a moral imbecile recognizes that it's wrong to contribute to senseless cruelty for the sake of comparatively minor benefits. We all recognize it's wrong to torture animals for pleasure—paying others to torture animals for our pleasure is similarly wrong. If factory farms are half as cruel as I make them out to be, then factory farming is inarguably the worst thing in human history. Around 99% of meat comes from factory farms—if you purchase meat without careful vetting, it almost definitely comes from a factory farm. Here, I'll just describe the facts about what goes on in factory farms. Of course, this understates the case, because much of what goes on is secret—the meat industry has fought hard to make it impossible to film them. As Scully notes: "It would be reasonable for the justices to ask themselves this question, too: If the use of gestation crates is proper and defensible animal husbandry, why has the NPPC lobbied to make it a crime to photograph that very practice?"
Here, I will show that factory farming is literally torture. This is not hyperbolic, but instead the obvious conclusion of a sober look at the facts. If we treated child molesters the way we treat billions of animals, we'd be condemned by the international community. The treatment of animals is unimaginably horrifying—evocative of the worst crimes in human history. Some may say that animals just cannot be tortured. But this is clearly a crazy view. If a person used pliers to cut off the toes of their pets, we'd regard that as torture. Unfortunately, what we do to billions of animals is far worse.
2 Pigs
Just like those who defended slavery, the eaters of meat often have farcical notions about how the beings whose mistreatment they defend are treated.
But unfortunately, the facts are quite different from those suggested by meat industry propaganda, and are worth reviewing. Excess pigs were roasted to death. Specifically, these pigs were killed by having hot steam enter the barn, at around 150 degrees, leading to them choking, suffocating, and roasting to death. It's hard to see how an industry that chokes and burns beings to death can be said to be anything other than nightmarish, especially given that pigs are smarter than dogs. Factory-farmed pigs, while pregnant, are stuffed in tiny gestation cr...

Under the Electric Stars
CROSSPOST - Petja: A Hi Nay Love Story

Under the Electric Stars

Play Episode Listen Later Apr 21, 2023 41:19


This is a crosspost with Hi Nay, a supernatural horror podcast created by Motzie Dapul. They're currently crowdfunding for their show, so please give them a listen and toss them some money if you have the means! Check them out at https://hinaypod.podbean.com/
SUPPORT THE FUNDRAISER FOR MORE HI NAY at ko-fi.com/hinaypod
We're continuing our fundraiser for Hi Nay so we can hit our stretch goals. More info here: https://hinaypod.tumblr.com/post/713958275663691776/buy-us-a-milk-tea
PETJA: A HI NAY LOVE STORY
Content Warnings: Toxic romance, power imbalance, self-loathing, mutilation, self-harm, body horror, puppets
The story of Petja, who loved a Puppetmaster with all his heart - and body, mind, and soul.
The music used is Swan Lake, Op. 20, Act II: 14. Scène by Pyotr Ilyich Tchaikovsky.

DekNet
CROSSPOST con RETROMATICA. Nativos Digitales

DekNet

Play Episode Listen Later Mar 31, 2023 100:51


The Nonlinear Library
EA - [Crosspost] Why Uncontrollable AI Looks More Likely Than Ever by Otto

The Nonlinear Library

Play Episode Listen Later Mar 8, 2023 6:29


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Crosspost] Why Uncontrollable AI Looks More Likely Than Ever, published by Otto on March 8, 2023 on The Effective Altruism Forum. This is a crosspost from Time Magazine, which also appeared in full at a number of other unpaid news websites.BY OTTO BARTEN AND ROMAN YAMPOLSKIY Barten is director of the Existential Risk Observatory, an Amsterdam-based nonprofit. Yampolskiy is a computer scientist at the University of Louisville, known for his work on AI Safety.“The first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control,” mathematician and science fiction writer I.J. Good wrote over 60 years ago. These prophetic words are now more relevant than ever, with artificial intelligence (AI) gaining capabilities at breakneck speed. In the last weeks, many jaws dropped as they witnessed transformation of AI from a handy but decidedly unscary recommender algorithm, to something that at times seemed to act worryingly humanlike. Some reporters were so shocked that they reported their conversation histories with large language model Bing Chat verbatim. And with good reason: few expected that what we thought were glorified autocomplete programs would suddenly threaten their users, refuse to carry out orders they found insulting, break security in an attempt to save a child's life, or declare their love to us. Yet this all happened. It can already be overwhelming to think about the immediate consequences of these new models. How are we going to grade papers if any student can use AI? What are the effects of these models on our daily work? Any knowledge worker, who may have thought they would not be affected by automation in the foreseeable future, suddenly has cause for concern. Beyond these direct consequences of currently existing models, however, awaits the more fundamental question of AI that has been on the table since the field's inception: what if we succeed? That is, what if AI researchers manage to make Artificial General Intelligence (AGI), or an AI that can perform any cognitive task at human level? Surprisingly few academics have seriously engaged with this question, despite working day and night to get to this point. It is obvious, though, that the consequences will be far-reaching, much beyond the consequences of even today's best large language models. If remote work, for example, could be done just as well by an AGI, employers may be able to simply spin up a few new digital employees to perform any task. The job prospects, economic value, self-worth, and political power of anyone not owning the machines might therefore completely dwindle . Those who do own this technology could achieve nearly anything in very short periods of time. That might mean skyrocketing economic growth, but also a rise in inequality, while meritocracy would become obsolete. But a true AGI could not only transform the world, it could also transform itself. Since AI research is one of the tasks an AGI could do better than us, it should be expected to be able to improve the state of AI. This might set off a positive feedback loop with ever better AIs creating ever better AIs, with no known theoretical limits. This would perhaps be positive rather than alarming, had it not been that this technology has the potential to become uncontrollable. 
Once an AI has a certain goal and self-improves, there is no known method to adjust this goal. An AI should in fact be expected to resist any such attempt, since goal modification would endanger carrying out its current one. Also, instrumental convergence predicts that AI, whatever its goals are, might start off by self-improving and acquiring more resources once it is sufficiently capable of doing so, since this should help it achieve whatever further goal ...

Cognitive Dissidents
Crosspost: How does the Russia-Ukraine war end? w/realagriculture

Cognitive Dissidents

Play Episode Listen Later Mar 6, 2023 33:23


This episode is a crossover episode with realagriculture! RealAg Radio host Shaun Haney and Jacob sit down for an in-depth discussion on the factors influencing the Russia-Ukraine war, one year in. From leaders' messaging, to Western involvement, to figuring out China, and finally to how it all ends, Shapiro shares his thoughts.
Catch the original here: https://www.realagriculture.com/2023/02/how-does-the-russia-ukraine-war-end-on-china-the-west-and-the-fog-of-war-with-jacob-shapiro/
--
CI LinkedIn: https://www.linkedin.com/company/cognitive-investments/
CI Website: https://cognitive.investments
CI Twitter: https://twitter.com/CognitiveInvest
Jacob LinkedIn: https://www.linkedin.com/in/jacob-l-s-a9337416/
Jacob Twitter: https://twitter.com/JacobShap
Subscribe to the Newsletter: https://investments.us17.list-manage.com/subscribe?u=156086d89c91a42d264546df7&id=4e31ca1340
--
Cognitive Investments is an investment advisory firm, founded in 2019 that provides clients with a nuanced array of financial planning, investment advisory and wealth management services. We aim to grow both our clients' material wealth (i.e. their existing financial assets) and their human wealth (i.e. their ability to make good strategic decisions for their business, family, and career).
--
Disclaimer: Nothing discussed on Cognitive Dissidents should be considered as investment advice. Please always do your own research & speak to a financial advisor before putting your money into the markets.
This podcast uses the following third-party services for analysis:
Chartable - https://chartable.com/privacy
Podtrac - https://analytics.podtrac.com/privacy-policy-gdrp

The Nonlinear Library
LW - [Crosspost] ACX 2022 Prediction Contest Results by Scott Alexander

The Nonlinear Library

Play Episode Listen Later Jan 24, 2023 18:44


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Crosspost] ACX 2022 Prediction Contest Results, published by Scott Alexander on January 24, 2023 on LessWrong. Original here. Submission statement/relevance to Less Wrong: This forecasting contest confirmed some things we already believed, like that superforecasters can consistently outperform others, or the "wisdom of crowds" effect. It also found a surprising benefit of prediction markets over other aggregation methods, which might or might not be spurious. Several members of the EA and rationalist community scored highly, including one professional AI forecaster. But Less Wrongers didn't consistently outperform members of the general (ACX-reading, forecasting-competition-entering) population. Last year saw surging inflation, a Russian invasion of Ukraine, and a surprise victory for Democrats in the US Senate. Pundits, politicians, and economists were caught flat-footed by these developments. Did anyone get them right? In a very technical sense, the single person who predicted 2022 most accurately was a 20-something data scientist at Amazon's forecasting division. I know this because last January, along with amateur statisticians Sam Marks and Eric Neyman, I solicited predictions from 508 people. This wasn't a very creative or free-form exercise - contest participants assigned percentage chances to 71 yes-or-no questions, like “Will Russia invade Ukraine?” or “Will the Dow end the year above 35000?” The whole thing was a bit hokey and constrained - Nassim Taleb wouldn't be amused - but it had the great advantage of allowing objective scoring. Our goal wasn't just to identify good predictors. It was to replicate previous findings about the nature of prediction. Are some people really “superforecasters” who do better than everyone else? Is there a “wisdom of crowds”? Does the Efficient Markets Hypothesis mean that prediction markets should beat individuals? Armed with 508 people's predictions, can we do math to them until we know more about the future (probabilistically, of course) than any ordinary mortal? After 2022 ended, Sam and Eric used a technique called log-loss scoring to grade everyone's probability estimates. Lower scores are better. The details are hard to explain, but for our contest, guessing 50% for everything would give a score of 40.21, and complete omniscience would give a perfect score of 0. Here's how the contest went: As mentioned above: guessing 50% corresponds to a score of 40.2. This would have put you in the eleventh percentile (yes, 11% of participants did worse than chance). Philip Tetlock and his team have identified “superforecasters” - people who seem to do surprisingly well at prediction tasks, again and again. Some of Tetlock's picks kindly agreed to participate in this contest and let me test them. The median superforecaster outscored 84% of other participants. The “wisdom of crowds” hypothesis says that averaging many ordinary people's predictions produces a “smoothed-out” prediction at least as good as experts. That proved true here. An aggregate created by averaging all 508 participants' guesses scored at the 84th percentile, equaling superforecaster performance. There are fancy ways to adjust people's predictions before aggregating them that outperformed simple averaging in the previous experiments. 
Eric tried one of these methods, and it scored at the 85th percentile, barely better than the simple average. Crowds can beat smart people, but crowds of smart people do best of all. The aggregate of the 12 participating superforecasters scored at the 97th percentile. Prediction markets did extraordinarily well during this competition, scoring at the 99.5th percentile - ie they beat 506 of the 508 participants, plus all other forms of aggregation. But this is an unfair comparison: our participants were only allowed to spend five minut...
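
A quick illustration of the two mechanics described above: log-loss scoring of probabilistic forecasts, and the "wisdom of crowds" aggregate formed by averaging every participant's probabilities before scoring. The snippet below is a minimal Python sketch under stated assumptions, not the contest's actual scoring script: the forecasts and outcomes are invented toy numbers, and it assumes natural-log scoring summed over the scored questions (the 40.21 baseline quoted above comes from the contest's own question set, which is not reproduced here).

import numpy as np

def log_loss_score(probs, outcomes):
    # Sum of per-question log losses (natural log); lower is better.
    # probs: a forecaster's P(yes) for each question; outcomes: 1 if it happened, else 0.
    probs = np.clip(np.asarray(probs, dtype=float), 1e-9, 1 - 1e-9)  # avoid log(0)
    outcomes = np.asarray(outcomes)
    return float(-(outcomes * np.log(probs) + (1 - outcomes) * np.log(1 - probs)).sum())

# Toy data: three forecasters, four resolved yes/no questions (illustrative numbers only).
forecasts = np.array([
    [0.9, 0.2, 0.7, 0.5],
    [0.6, 0.4, 0.8, 0.3],
    [0.5, 0.5, 0.5, 0.5],   # the "guess 50% on everything" baseline
])
outcomes = np.array([1, 0, 1, 0])

for i, row in enumerate(forecasts):
    print(f"forecaster {i}: {log_loss_score(row, outcomes):.3f}")

# "Wisdom of crowds": average the probabilities question by question,
# then score the averaged forecast like any other participant.
crowd = forecasts.mean(axis=0)
print(f"crowd average: {log_loss_score(crowd, outcomes):.3f}")

# With natural-log scoring, a uniform 50% guess costs ln(2) ≈ 0.693 per question,
# so the all-50% row above scores 4 * 0.693 ≈ 2.773.

Averaging first and then scoring is the plain crowd aggregate the post refers to; the fancier pre-aggregation adjustments it mentions would replace the simple mean step, but since the post does not specify which method was used, none is shown here.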

The Nonlinear Library: LessWrong
LW - [Crosspost] ACX 2022 Prediction Contest Results by Scott Alexander

The Nonlinear Library: LessWrong

Play Episode Listen Later Jan 24, 2023 18:44


Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Crosspost] ACX 2022 Prediction Contest Results, published by Scott Alexander on January 24, 2023 on LessWrong. Original here. Submission statement/relevance to Less Wrong: This forecasting contest confirmed some things we already believed, like that superforecasters can consistently outperform others, or the "wisdom of crowds" effect. It also found a surprising benefit of prediction markets over other aggregation methods, which might or might not be spurious. Several members of the EA and rationalist community scored highly, including one professional AI forecaster. But Less Wrongers didn't consistently outperform members of the general (ACX-reading, forecasting-competition-entering) population. Last year saw surging inflation, a Russian invasion of Ukraine, and a surprise victory for Democrats in the US Senate. Pundits, politicians, and economists were caught flat-footed by these developments. Did anyone get them right? In a very technical sense, the single person who predicted 2022 most accurately was a 20-something data scientist at Amazon's forecasting division. I know this because last January, along with amateur statisticians Sam Marks and Eric Neyman, I solicited predictions from 508 people. This wasn't a very creative or free-form exercise - contest participants assigned percentage chances to 71 yes-or-no questions, like “Will Russia invade Ukraine?” or “Will the Dow end the year above 35000?” The whole thing was a bit hokey and constrained - Nassim Taleb wouldn't be amused - but it had the great advantage of allowing objective scoring. Our goal wasn't just to identify good predictors. It was to replicate previous findings about the nature of prediction. Are some people really “superforecasters” who do better than everyone else? Is there a “wisdom of crowds”? Does the Efficient Markets Hypothesis mean that prediction markets should beat individuals? Armed with 508 people's predictions, can we do math to them until we know more about the future (probabilistically, of course) than any ordinary mortal? After 2022 ended, Sam and Eric used a technique called log-loss scoring to grade everyone's probability estimates. Lower scores are better. The details are hard to explain, but for our contest, guessing 50% for everything would give a score of 40.21, and complete omniscience would give a perfect score of 0. Here's how the contest went: As mentioned above: guessing 50% corresponds to a score of 40.2. This would have put you in the eleventh percentile (yes, 11% of participants did worse than chance). Philip Tetlock and his team have identified “superforecasters” - people who seem to do surprisingly well at prediction tasks, again and again. Some of Tetlock's picks kindly agreed to participate in this contest and let me test them. The median superforecaster outscored 84% of other participants. The “wisdom of crowds” hypothesis says that averaging many ordinary people's predictions produces a “smoothed-out” prediction at least as good as experts. That proved true here. An aggregate created by averaging all 508 participants' guesses scored at the 84th percentile, equaling superforecaster performance. There are fancy ways to adjust people's predictions before aggregating them that outperformed simple averaging in the previous experiments. 
Eric tried one of these methods, and it scored at the 85th percentile, barely better than the simple average. Crowds can beat smart people, but crowds of smart people do best of all. The aggregate of the 12 participating superforecasters scored at the 97th percentile. Prediction markets did extraordinarily well during this competition, scoring at the 99.5th percentile - ie they beat 506 of the 508 participants, plus all other forms of aggregation. But this is an unfair comparison: our participants were only allowed to spend five minut...

What the Hell is a Pastor?
Crosspost: Bar of the Conference

What the Hell is a Pastor?

Play Episode Listen Later Jan 16, 2023 57:27


In which we share episode 2 of the new podcast, Bar of the Conference, where friend of the pod, Ian, joins host Derrick Scott III to discuss holy conferencing and much, much more. Find Bar of the Conference wherever you find podcasts, and get more information on the podcasts Derrick works on at https://www.wesleysrevival.com/. WHAT THE HELL IS A PASTOR HAS MERCH! https://www.bonfire.com/what-the-hell-is-a-pastor-theme-tee/ https://www.bonfire.com/wthiap-the-void/ Excited about WTHIAP OTR (What the Hell is a Pastor on the Road)? Support us over on Patreon to make that dream a reality: https://www.patreon.com/wthiap. Want to reach out? Email us at whatthehellisapastor@gmail.com. Like Twitter/facebook/instagram? We do too, we guess. Find us under the handle @WTHIAP.

The Nonlinear Library
EA - [Crosspost]: Huge volcanic eruptions: time to prepare (Nature) by Mike Cassidy

The Nonlinear Library

Play Episode Listen Later Aug 19, 2022 2:01


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Crosspost]: Huge volcanic eruptions: time to prepare (Nature), published by Mike Cassidy on August 19, 2022 on The Effective Altruism Forum. Lara Mani and I have a comment article published in Nature this week about large magnitude volcanic eruptions: TLDR: I also wrote a twitter thread here: This is a more condensed focus piece, but contains elements we've covered in these posts too. This is really the start of the work we've been doing in this area; we're hoping to quantify how globally catastrophic large eruptions would be for our global food, water and critical systems. From there, we'll have a better idea of the most effective mitigation strategies. But because this is such a neglected area (screenshot below), we know that even modest investment and effort will go a long way. We highlight several ways we think could help save a lot of lives both in the near term (smaller, more frequent eruptions) and in the future (large mag and super-eruptions): a) pinpointing where the biggest risk areas/volcanoes are b) increasing and improving monitoring c) increasing preparedness (e.g. nowcasting - see below), and d) researching volcano geoengineering (the ethics of which we're working with Anders Sandberg on). The last point may interest some others in the x-risk community, as potential solutions like these ones (screenshot below) could potentially help mitigate the effects from nuclear and asteroid winters too. We're having conversations with atmospheric scientists about this type of research. Another way tech-savvy EAs might be able to help is with the creation of 'nowcasting' technology, which again would be useful for a range of Global Catastrophic Risks. The paper has been covered a fair bit in the international media (e.g.), and we feel like we could use this momentum to make some tractable improvements to global volcanic risk. If you'd like to help fund our work or discuss any of these ideas with us then get in touch! Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

IdjitCast – QuadrupleZ
Last Crosspost, Thoughts on That 80's Show 01 “Pilot”

IdjitCast – QuadrupleZ

Play Episode Listen Later Aug 3, 2022 5:07


I hope you will join me on the main feed for “One Idjit’s Thoughts On…,” or at least on the Dog Days of Podcasting Master feed. This will be the last episode of One Idjit’s Thoughts On that I post in the Idjitcast feed. Send us your feedback! www.facebook.com/groups/idjitcast You can also find a rough schedule of our…Continue reading →

IdjitCast – QuadrupleZ
Crosspost of What's Next

IdjitCast – QuadrupleZ

Play Episode Listen Later Aug 2, 2022 5:35


This is another crosspost of a “One Idjit’s Thoughts On…” podcast episode. I think we’ll do one more crosspost tomorrow and anyone who wants to come along can either find “One Idjit’s Thoughts On…” or go to dogdaysofpodcasting.com and listen to the master feed of all the Dog Days shows. Send us your feedback! www.facebook.com/groups/idjitcast You can…Continue reading →

IdjitCast – QuadrupleZ
CrossPost One Idjit's Thoughts on Minicasts

IdjitCast – QuadrupleZ

Play Episode Listen Later Aug 1, 2022 4:25


This is a crosspost from the new feed. Episode 1 is One Idjit’s Thoughts on Minicasts. Apologies if you have signed up for Idjitcast, One Idjit’s Thoughts, and the Dog Days master feed, as you’ll get this three times, I will stop crossposting when I get into the main swing of things. Send us your…Continue reading →

Unsinkable: The Titanic Podcast
Crosspost: Unsinkable x Titanic Talkline!

Unsinkable: The Titanic Podcast

Play Episode Listen Later Jul 29, 2022 88:46


A very special crosspost: my appearance on the new podcast Titanic Talkline. I was honored to be in the first batch of guests, and Alexia's interviews are refreshing and fun. Make sure to search "Titanic Talkline" in whatever pod player you use and subscribe!
Find info here as well: https://www.facebook.com/TitanicTalkline/
Support the show

My Hero Academia Podcast
Crosspost The View From the Top Ep 31 Haikyu!! Chapter 402.1

My Hero Academia Podcast

Play Episode Listen Later May 2, 2022 82:17


Welcome to a special crosspost of the View From the Top: a Haikyu!! Podcast. MHA is on break this week so we are crossposting the return of the View From the Top covering a bonus chapter of Haikyu!!. Kendra (@sniperofmyheart), Gabi (@yamineftis), Lisa (@LisLisLiso), and Marion (@microwaevy) reintroduce themselves after two years on hiatus (0:52), cover the latest Haikyu!! news (14:17), readthrough and discuss the official release of Haikyu!! Chapter 402.1 "A Party Reignited", and take listener questions (1:14:42). Lisa's zines @daisugazine @hqteamzines are still open for sales! and all of Marion's projects here. Twitter: @haikyupod https://directory.libsyn.com/shows/view/id/775f63e7-bdcb-4016-a89b-15040bfbdd94 RSS feed: http://feeds.libsyn.com/413288/rss?_ga=2.155339775.572751016.1651719142-426439008.1642458587

The Antifada
CROSSPOST: Castrating The Bitcoin Bull w/ Jamie, Aaron & Jorge

The Antifada

Play Episode Listen Later Apr 20, 2022 87:18


In their first ever CURRENT EVENTS episode, the Everybody Loves Communism crew of Jamie Peck, Aaron Thorpe (he's back, y'all!) and Jorge Rocha (@linegoesdown) take on various topics of the day, including Peter Thiel's Bitcoin meltdown and NYC mayor Eric Adams' love of crypto. Okay, so it's mostly about Bitcoin. What does it mean that various capitalists are having it out over this dumb new commodity? To the 1,000 or so of you who are already listening to ELC on the regs: sorry for the dearth of slop this week. Come to the Eve 6 afterparty and Jamie will buy you a drink. No seriously, she's throwing the afterparty for the 5/4 Eve 6 NYC tour date. DM for deets! Listen to ELC and give us money: Patreon.com/everybodylovescommunism or Fans.fm/everybodylovescommunism Follow ELC on Twitter: @ELCpod

Disgorgeous
FLAWLESS Season Finale ft Disgorgeous

Disgorgeous

Play Episode Listen Later Mar 14, 2022 64:53


*Crosspost w/ our beautiful child, Flawless*The end of an era! To finish off our series, we investigate the biggest flaw of all, capitalism, with our host dads aka wine menaces to society, the boys from Disgorgeous. We go up, down, all around, a little bit of Monica, a lot about feelings, and obviously about how wine's human effects can be catalysts for good change.we drank:19 Crimes Rosé19 Crimes Cali RedBarefoot Bubbly Sweet RoséBartenura Moscato d'AstiDark Horse Pinot GrigioMyx Moscato & PeachMyx Moscato & GrapeSupport the show (https://www.patreon.com/Disgorgeous)

The Nonlinear Library
EA - Elon Musk donates $5.7 Billion, will work with Igor Kurganov [Crosspost] by Nathan Young

The Nonlinear Library

Play Episode Listen Later Feb 16, 2022 5:35


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Elon Musk donates $5.7 Billion, will work with Igor Kurganov [Crosspost], published by Nathan Young on February 16, 2022 on The Effective Altruism Forum. Bloomberg reports the following - without permission: "Elon Musk, it seems, is finally becoming a philanthropist on the scale of his billionaire peers. The electric car and space mogul gifted $5.7 billion worth of Tesla Inc. stock to charity in the span of 10 days in November -- many times more than he's given away through his eponymous foundation in the two decades since it was founded. Where that donation is going is a mystery, but it's just one more signal that the world's richest person is taking philanthropy more seriously. The decision by Musk, 50, to donate more than 5 million shares in the electric-car maker was disclosed in a regulatory filing Monday night, and comes on the heels of some of his biggest-ever philanthropic commitments -- though nothing has come close to the scale of billions of dollars. It would also help reduce what he called the biggest tax bill in U.S. history. The Musk Foundation in the past couple of years has made eight-figure grants to the city and local school system near his South Texas spaceport, a $100 million ready-made competition to fight climate change and millions of dollars to a pair of Covid-19 researchers.
[Photo caption: Musk at the construction site for a new Tesla Gigafactory near Gruenheide, Germany in 2020.]
Almost all of those recipients have been primarily working with Igor Kurganov, a professional poker player-turned-philanthropist, who Musk has recently enlisted to keep in contact with grantees and consider their proposals.
Effective Altruism
Kurganov, who has won more than $18 million during his poker career, is active in the world of so-called effective altruism, a philanthropic and philosophical movement that tries to have the greatest impact by carefully spending money to solve problems. Kurganov is the co-founder of Raising for Effective Giving, an organization created by a group of poker players that recommends “highly cost-effective charities.” He's also an adviser to the Forethought Foundation, a project by the Centre for Effective Altruism. Calls and emails to Kurganov haven't been returned. Jared Birchall, the managing director of Musk's family office, and Aaron Beckman, Tesla's senior manager of stock administration, didn't respond to requests for comment. If Musk's foundation is turning to effective altruism, it hasn't showed yet, said Alixandra Barasch, an associate professor of marketing at New York University who has done research on the movement. Musk doesn't regularly publicize his gifts and tax documents that provide such disclosures have a years-long delay. What Musk has shared recently includes $10 million to Brownsville, Texas, near where SpaceX is located, to revitalize its downtown. So far, that money has gone to a grants program for local property owners, lighting projects and murals.
Scale Up
Saving lives is the primary focus of effective altruism, and “murals don't do that much good in terms of the way it's measured and defined in the movement,” Barasch said. Other gifts from Musk, like the $20 million for Cameron County schools, the $100 million commitment for the XPrize Carbon Removal competition and $5 million for Khan Academy, similarly don't have the focus that effective altruism typically does, Barasch said.
But scale is also a factor, she said, and if Musk's foundation is influenced by the philosophy, his giving could increase significantly. “Looking at those amounts I'm like, ‘Holy moly, there's millions of dollars, it's a lot of money,'” Barasch said. But “it's nothing compared to what he's worth.” Musk has a $227 billion personal fortune, according to the Bloomberg Billionaires Index. The donation of $5.7 billion of Tesla stock could be a sign that...

Lost in Lambduhhs
:michelle-lim (Story of a crosspost party!)

Lost in Lambduhhs

Play Episode Listen Later Nov 23, 2021 93:01


Yukkin it up with some buddies! Yet again I crash the Defn Podcast party- and let me tell you a PARTY it is. This is a crosspost and was recorded a month or two ago while I was on the road traveling through CA. [for some context] At the time of recording I was posted up in a sorry excuse for a hotel room with SHAMEFULLY SLOW internet in lovely(?) Silicon Valley, CA. We talkin mentorship, music!, web-development, react native with Krell vs reactjs, tailwind, how to use STORYBOOK to quickly dev components, the experiences of working at vouch.io and so much more!! Enjoy!!! Bird app links: Michelle - https://twitter.com/eemshi91 Defn Podcast- https://twitter.com/DefnPodcast Paula- https://twitter.com/quoll A bit more context about the stack at vouch.io: https://vouch.io/developing-mobile-digital-key-applications-with-clojurescript/ https://github.com/vouch-opensource/krell

The Huntress Podcast
Bonus Episode: Reverend Hunter Interview with Murphy (Crosspost)

The Huntress Podcast

Play Episode Listen Later May 5, 2021 69:51


Tony Jones of the Reverend Hunter podcast interviews Murphy about hunting and spiritual practice. The two discuss gender, honoring our prey, prayer and hunting, hunting myths from Christian, Norse, and Greek Mythology, and more. Enjoy this respectful and engaging conversation between two very different hunters who explore their differences and common ground.

The Better Human Podcast
#131 – Colin Stuckert: Mindset Coach - Crosspost from the K8 4 Wellness Podcast

The Better Human Podcast

Play Episode Listen Later Mar 13, 2021 62:44


Colin Stuckert: Mindset Coach - Crosspost from the K8 4 Wellness Podcast
This episode of the Better Human Podcast is a crosspost from the K8 4 Wellness Podcast, where host Kate Cretsinger interviewed Colin on everything related to mindset, health, first principles, his educational programmes and much more. Tune in to get to know your favorite host, Colin, a little bit better, because today the tables are turned - the interviewer becomes the interviewee!
Don't forget to check out Kate's amazing work over at k84wellness.com!

The Antifada
Crosspost: Year Zero - Thousand Year Stare w/ Terance Ray & Sean KB

The Antifada

Play Episode Listen Later May 25, 2020 109:44


(Crosspost from an episode Sean and Tarence did for those who missed it.) Introducing Year Zero, a new miniseries on political economy hosted by Tarence. In this first episode we speak with Sean KB (@as_a_worker on twitter) from the Antifada podcast (@the_antifada) about what he terms The Thousand Year Stare: the specific feeling one unlocks by reading Giovanni Arrighi's "The Long Twentieth Century." We'll be using the book to discuss the chaotic times in which we live, but don't worry: you don't have to have read it to follow along in the conversation. You just have to be willing to face the past, present, and future with a perfected thousand year stare...