Podcasts about rationalist

Philosophical view that reason should be the chief source of knowledge

  • 127PODCASTS
  • 13,728EPISODES
  • 13mAVG DURATION
  • 10+DAILY NEW EPISODES
  • Apr 21, 2025LATEST
rationalist

POPULARITY

20172018201920202021202220232024


Best podcasts about rationalist

Latest podcast episodes about rationalist

The Farm Podcast Mach II
Zizians, Mendicants, Basilisks & More Weird Tales w/ David Z. Morris & Recluse

The Farm Podcast Mach II

Play Episode Listen Later Apr 21, 2025 123:50


Zizians, technofeudalism, Rationalist movement, COINTELPRO, Philadelphia/Wilmington suburbs, seasteading, Vassarites, PayPal mafia, Bay Area, Medieval era, Mendicants, Effective Altruism (EA), Sam Bankman-Fried (SBF), FTX, cryptocurrency, cybernetics, science fiction, techno-utopianism, the American obsession with technology/science, Extropianism, Accelerationism, AI, Roko's Basilisk, DOGE, cypherpunks, assassination politics, behavior modification, cults, ketamine, Leverage Research, ARTICHOKE/MK-ULTRA, the brain as a computer, Programmed to Kill, modern proliferation of cults, Order of Nine Angles (O9A), Maniac Murder Cult (MKY), digital Gladio, networking, decentralized finance (DeFi), digital commonsPurchase Weird Tales :Amazon: https://www.amazon.com/Weird-Tales-Zizians-Crypto-Demiurges/dp/B0F48538C6?ref_=ast_author_dpEbook (KDP/PDF): https://thefarmpodcast.store/Music by: Keith Allen Dennishttps://keithallendennis.bandcamp.com/Additional Music: J Money Get bonus content on Patreon Hosted on Acast. See acast.com/privacy for more information.

Iron Sharpens Iron Radio with Chris Arnzen
April 7, 2025 Show with Jerry Johnson on “Was the Apostle Peter a Rationalist?”

Iron Sharpens Iron Radio with Chris Arnzen

Play Episode Listen Later Apr 8, 2025 119:49


April 7, 2025 JERRY JOHNSON,Reformed Christian apologist &renowned documentarian, mostwell known for cowriting & co-producing the popular DVD series:“Amazing Grace: The History &Theology of Calvinism”, & hiswebcast series, “Against theWorld”, who will address: “WAS the APOSTLE PETERa RATIONALIST?” Subscribe: Listen:

War College
The Cult of Rationalism in Silicon Valley

War College

Play Episode Listen Later Mar 25, 2025 61:34


A lot of the people designing America's technology and close to the center of American power believe some deeply weird shit. We already talked to journalist Gil Duran about the Nerd Reich, the rise of the destructive anti-democratic ideology. In this episode, we dive into another weird section of Silicon Valley: the cult of Rationalism.Max Read, the journalist behind the Read Max Substack, is here to help us through it. Rationalism is responsible for a lot more than you might think and Read lays out how it's influenced the world we live in today and how it created the environment for a cult that's got a body count.Defining rationalism: “Something between a movement, a community, and a self-help program.”Eliezer Yudkowsky and the dangers of AIWhat the hell is AGI?The Singleton Guide to Global GovernanceThe danger of thought experimentsAs always, follow the moneyVulgar bayesianismWhat's a Zizian?Sith VegansAnselm: Ontological Argument for God's ExistenceSBF and Effective AltruismREAD MAX!The Zizians and the Rationalist death cultsPausing AI Developments Isn't Enough. We Need to Shut it All Down - Eliezer Yudkowsky's TIME Magazine pieceExplaining Roko's Basilisk, the Thought Experiment That Brought Elon Musk and Grimes TogetherThe Delirious, Violent, Impossible True Story of the ZiziansThe Government Knows AGI is Coming | The Ezra Klein ShowThe archived ‘Is Trump Racist' rational postSupport this show http://supporter.acast.com/warcollege. Hosted on Acast. See acast.com/privacy for more information.

Machshavah Lab
Ki Tisa: Shadal's Anti-Rationalist Rational Explanation of Ayin ha'Ra (the Evil Eye)

Machshavah Lab

Play Episode Listen Later Mar 17, 2025 12:08


Have any questions, insights, or feedback? Send me a text!Length of article: 3 pagesLength of audio: 11 minutes 6 secondsSynopsis: This is the audio version of the 3-page article I wrote and published on rabbischneeweiss.substack.com/ on 3/17/25, titled: Ki Tisa: Shadal's Anti-Rationalist Rational Explanation of Ayin ha'Ra (the Evil Eye). What if I told you there's an explanation of ayin ha'ra that is neither purely rationalistic nor mystical? True to form, Shadal offers just such an explanation.-----The Torah content for this week has been sponsored by my friend, Rabbi Dr. Elie Feder. His latest book, Happiness in the Face of Adversity: Powerful Torah Ideas from a Mom's Parting Words, shares the wisdom of Shani Feder a"h, a true Eishes Chayil. This is the kind of Torah I wish more people knew—ideas that directly impact our experience of life. Available now on Amazon.-----If you've gained from what you've learned here, please consider contributing to my Patreon at www.patreon.com/rabbischneeweiss. Alternatively, if you would like to make a direct contribution to the "Rabbi Schneeweiss Torah Content Fund," my Venmo is @Matt-Schneeweiss, and my Zelle and PayPal are mattschneeweiss at gmail. Even a small contribution goes a long way to covering the costs of my podcasts, and will provide me with the financial freedom to produce even more Torah content for you.If you would like to sponsor a day's or a week's worth of content, or if you are interested in enlisting my services as a teacher or tutor, you can reach me at rabbischneeweiss at gmail. Thank you to my listeners for listening, thank you to my readers for reading, and thank you to my supporters for supporting my efforts to make Torah ideas available and accessible to everyone.-----Substack: rabbischneeweiss.substack.com/Patreon: patreon.com/rabbischneeweissYouTube Channel: youtube.com/rabbischneeweissInstagram: instagram.com/rabbischneeweiss/"The Stoic Jew" Podcast: thestoicjew.buzzsprout.com"Machshavah Lab" Podcast: machshavahlab.buzzsprout.com"The Mishlei Podcast": mishlei.buzzsprout.com"Rambam Bekius" Podcast: rambambekius.buzzsprout.com"The Tefilah Podcast": tefilah.buzzsprout.comOld Blog: kolhaseridim.blogspot.com/WhatsApp Content Hub (where I post all my content and announce my public classes): https://chat.whatsapp.com/GEB1EPIAarsELfHWuI2k0HAmazon Wishlist: amazon.com/hz/wishlist/ls/Y72CSP86S24W?ref_=wl_sharel

amazon happiness mom paypal ra substack explanation torah venmo alternatively zelle evil eye rabbi dr parting words ki tisa ayin rationalist stoic jew machshavah lab mishlei podcast rambam bekius tefilah podcast rabbi schneeweiss torah content fund matt schneeweiss
Behind the Bastards
Part Two: The Zizians: Birth of a Cult Leader

Behind the Bastards

Play Episode Listen Later Mar 13, 2025 77:36 Transcription Available


Robert tells David Gborie about the early life of Ziz LaSota, a bright young girl from Alaska who came to the Bay Area with dreams of saving the cosmos or destroying it, all based on her obsession with Rationalist blogs and fanfic.See omnystudio.com/listener for privacy information.

Earth Ancients
George A. Sarantitis: Plato's Atlantis

Earth Ancients

Play Episode Listen Later Mar 8, 2025 103:18


What you are about to read in these pages, is part of a long and systematic research begun with the intention of re-examining Plato's works of Timaeus and Critias.Its original objective was exceeded by some of those achieved. The idea was to make a connotatively accurate translation through which to examine the logic of the ancient Greek myths and of Plato the Rationalist in the role of Mythographer. This sort of translation is probably without precedence. It is certainly not commercially viable at this stage. But it does provide the accurate sense of every word, phrase, line, paragraph and passage of the ancient text. To ‘study the logic of myths', means to conduct an examination of a mythical account in order to see whether it contains connotations, terms, expressions or a particular form of writing in which can be identified possible axioms, laws, principles and rules or perhaps a systemic procedure that allows the taxonomy of what is true and what is false. To ‘study the logic of Plato', means to seek the rationale of the mythographer and what method (if any) he applied when writing a myth and to determine whether he combined truths and falsehoods and if so, why.Ultimately, to study what a myth is in purpose and in function because  the Ancient Greek Myths have shown that contain many and important true information. So, why one should write such a true story in a such way that looks false?Plato was preferred because he has always been regarded as the representation of Rationalism, which somehow seems incompatible with creative writing.  Accordingly, the myth chosen as the most appropriate for examination was that of Atlantis because of its workable length -neither too long nor short-, its descriptive elements and the acknowledged authenticity of its author Plato.The results of this taxing, in every aspect, investigation, as the reader will quickly come to appreciate from simply reading the information herein,  were entirely unexpected and cannot be regarded as anything less than astounding.(As a whole, the cost of the first research has exceeded € 200.000 and has run into thousands of man-hours. Besides the wealth of information here, there's much more and just as rich.)Although the project's initial intent was to study the logic in myths and mythographers and which study yielded an unexpected amount of data as well as a formal structure to myths, the investigation went on to lead to somewhere completely different and by so doing, reward the author with a magnificent prize (amongst many), namely, the full decipherment of the myth of Atlantis and revelation of the whole truth!There now remains for the archaeologists to confirm these groundbreaking findings since, History seeks the truth, while Archaeology seeks the evidence.The two parts of the Methodology of Mythology (MoM1 and MoM2) that follow are in brief outline and almost exactly as when presented at an international conference of Philosophy and at other scientific meetings and scholarly proceedings, where they made excellent impressions to corresponding acclaim. They reveal a hitherto unknown dimension to myths, at least to those written by Plato and Homer. It is the application of a singular method which sorts out the truths and falsehoods contained in the myth.  The MoM also revealed a way of writing which conceals information in outwardly straightforward text, information that would have been discernable only to whoever had been instructed as to this esoteric form of writing.The third part of the project is about Atlantis and its analysis in the book ‘The Apocalypse* of a Myth'. It deals with the decipherment of the myth and the identification of Atlantis as a physical entity. The reader of this site is recommended to go first into MoM1 & 2 and then into the part on Atlantis. It is not obligatory to follow this sequence but it will facilitate the reader's realization that the recount of Atlantis is not a ‘regular' story and has much hidden beneath the surface, even a tiny part is presented here. Certainly, Plato's reports do not make for straightforward or easy comprehension. If they did, the ambiguity surrounding Atlantis for the past ~2.300 years would not have remained so mystifying, simply because it would have been resolved long ago. The reader of this website will almost certainly come to appreciate the words of warning and prior notice as to the aptitude for rational thought that Plato demands of his reader.The same challenges in comprehension apply to the completed and comprehensive book ‘The Apocalypse* of a Myth'. A limited advance edition was published in Greek while the main and updated edition is in English. As assessed by many of the 200 or so test readers of the Greek edition, most being graduates from institutions of higher learning, the book ranges from decidedly thought provoking to highly exciting (for all Scientific fields) even if indeed challenging. Most who sought to fully understand all that the book contains admitted to reading it at least twice in full while going over certain aspects of it several times. It was truly gratifying to hear by many that they placed the book amongst those most often visited in their library, because of the plethora of useful information it contains in general for anyone wishing to delve further into ancient historical events or even for philosophical perspectives, irrespective of the Atlantis storyline.https://platoproject.gr/Become a supporter of this podcast: https://www.spreaker.com/podcast/earth-ancients--2790919/support.

Slate Star Codex Podcast
Lives Of The Rationalist Saints

Slate Star Codex Podcast

Play Episode Listen Later Mar 6, 2025 8:05


St. Felix publicly declared that he believed with 79% probability that COVID had a natural origin. He was brought before the Emperor, who threatened him with execution unless he updated to 100%. When St. Felix refused, the Emperor was impressed with his integrity, and said he would release him if he merely updated to 90%. St. Felix refused again, and the Emperor, fearing revolt, promised to release him if he merely rounded up one percentage point to 80%. St. Felix cited Tetlock's research showing that the last digit contained useful information, refused a third time, and was crucified. St. Clare was so upset about believing false things during her dreams that she took modafinil every night rather than sleep. She completed several impressive programming projects before passing away of sleep deprivation after three weeks; she was declared a martyr by Pope Raymond II. https://www.astralcodexten.com/p/lives-of-the-rationalist-saints

LessWrong Curated Podcast
“The Failed Strategy of Artificial Intelligence Doomers” by Ben Pace

LessWrong Curated Podcast

Play Episode Listen Later Feb 16, 2025 8:39


This is the best sociological account of the AI x-risk reduction efforts of the last ~decade that I've seen. I encourage folks to engage with its critique and propose better strategies going forward.Here's the opening ~20% of the post. I encourage reading it all.In recent decades, a growing coalition has emerged to oppose the development of artificial intelligence technology, for fear that the imminent development of smarter-than-human machines could doom humanity to extinction. The now-influential form of these ideas began as debates among academics and internet denizens, which eventually took form—especially within the Rationalist and Effective Altruist movements—and grew in intellectual influence over time, along the way collecting legible endorsements from authoritative scientists like Stephen Hawking and Geoffrey Hinton.Ironically, by spreading the belief that superintelligent AI is achievable and supremely powerful, these “AI Doomers,” as they came to be called, inspired the creation of OpenAI and [...] --- First published: January 31st, 2025 Source: https://www.lesswrong.com/posts/YqrAoCzNytYWtnsAx/the-failed-strategy-of-artificial-intelligence-doomers --- Narrated by TYPE III AUDIO.

The Farm Podcast Mach II
Thiel, Yudkowsky, Rationalists & the Cult of Ziz w/ David Z. Morris & Recluse

The Farm Podcast Mach II

Play Episode Listen Later Feb 3, 2025 109:59


Zizians, Rationalist movement, Peter Thiel, Eliezer Yudkowsky, neoreaction, Accelerationism, Curtis Yarvin, AI, AI apocalypse, machine learning, psychedelics, Effective Altruism (EA), Sam Bankman-Fried, Extropianism, Thiel & Yudkowsky as Extropians, Discordianism, life extension, space colonization, cryptocurrencies, Yudkowsky as self-educated, Nick Bostrom, Center for Applied Rationality (CFAR), Rationalism's use of magical thinking, New Thought, Roko's Basilisk, Nick Land, predicting the future, LessWrong, LessWrong's relations ship to the Zizians, Ziz, non-binary/trans, vegan Siths, Vasserites, murders linked to Zizians, Zizians in Vermont, Luigi Mangione indirectly influenced by Zizianism, Brain Thompson assassination, ChangeHealthcare hack, were the hack and assassination targeting UnitedHealth Group influenced by this milieu?, is the Trump administration radicalizing Zizians?, Yudkowsky's links to Sam Bankman-Fried, Leverage Research/Center for Effective Altruism & MK-ULTRA-like techniques used by, are more cults coming from the Rationalist movement?Additional Resources:Leverage Research:https://medium.com/@zoecurzi/my-experience-with-leverage-research-17e96a8e540b#c778MIRI/Center for Applied Rationality (CFAR):https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoeMusic by: Keith Allen Dennishttps://keithallendennis.bandcamp.com/Additional Music: J Money Get bonus content on Patreon Hosted on Acast. See acast.com/privacy for more information.

TrueAnon
Episode 434: Evil Gods Must Be Fought: The Zizian Murder Cult [Part 1]

TrueAnon

Play Episode Listen Later Jan 29, 2025 128:17


Part one of our two-part investigation into the Rationalist cult “The Zizians.” We start with the killing of a border patrol officer and make our way back into the belly of the beast: Silicon Valley. Featuring: Harry Potter fanfic, samurai swords, Guy Fawkes masks, Blake Masters, Bayesian probability, and Eliezer Yudkowsky. Infohazard warning: some of your least favs will be implicated. Discover more episodes at podcast.trueanon.com

Cannabis School
The Sesh - Heavy Metal Highs

Cannabis School

Play Episode Listen Later Nov 14, 2024 97:29


Get ready to turn up the volume! In this episode, we sit down with Rationalist, Utah's own heavy metal powerhouse. We dive deep into the band's journey, the raw energy behind their music, and what it's like to bring heavy metal to Utah's underground scene. They share stories from the road, what fuels their intense sound, and how they're breaking the mold in a place not known for metal. Whether you're a die-hard metal fan or just curious about the scene, this episode brings the heat! Highlights: • Discover the origins of Rationalist and their rise in Utah's music scene. • Insight into their creative process and the raw emotion behind their lyrics. Tune in, crank it up, and get ready to rock with Rationalist! https://www.instagram.com/rationalistband/ https://open.spotify.com/artist/62aOzpPv0udj3q7NMcKX8T?si=J4AFzVLhQUGuufV_g1_71A https://music.apple.com/za/artist/rationalist/1767687687 Check out Cannabis School Approved Support the Show: Help us keep sharing cannabis education! ⁠Buy us a coffee here⁠. Connect with Us: Have questions or feedback? Reach out to us at ⁠hosts@cannabisschool.us Subscribe to our ⁠YouTube channel⁠  Follow Us: ⁠Website⁠ | ⁠Instagram⁠ | ⁠Facebook⁠ | ⁠TikTok⁠ Music Credit: Psalm Trees, James Berkeley - Ah Yeah ⁠Listen Here⁠ Cannabis education, Cannabis podcast, Cannabis enthusiasts,  sweet cannabis strains, Cannabis effects, Cannabis usage, Cannabis consumption, Cannabis strains, Cannabis tips, Cannabis wellness, Cannabis and relaxation, Cannabis and creativity, fruity cannabis strains, Cannabis community. --- Support this podcast: https://podcasters.spotify.com/pod/show/cannabisschool/support

The Nonlinear Library
LW - Augmenting Statistical Models with Natural Language Parameters by jsteinhardt

The Nonlinear Library

Play Episode Listen Later Sep 22, 2024 16:41


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Augmenting Statistical Models with Natural Language Parameters, published by jsteinhardt on September 22, 2024 on LessWrong. This is a guest post by my student Ruiqi Zhong, who has some very exciting work defining new families of statistical models that can take natural language explanations as parameters. The motivation is that existing statistical models are bad at explaining structured data. To address this problem, we agument these models with natural language parameters, which can represent interpretable abstract features and be learned automatically. Imagine the following scenario: It is the year 3024. We are historians trying to understand what happened between 2016 and 2024, by looking at how Twitter topics changed across that time period. We are given a dataset of user-posted images sorted by time, $x_1$, $x_2$ ... $x_T$, and our goal is to find trends in this dataset to help interpret what happened. If we successfully achieve our goal, we would discover, for instance, (1) a recurring spike of images depicting athletes every four years for the Olympics, and (2) a large increase in images containing medical concepts during and after the COVID-19 pandemic. How do we usually discover temporal trends from a dataset? One common approach is to fit a time series model to predict how the features evolve and then interpret the learned model. However, it is unclear what features to use: pixels and neural image embeddings are high-dimensional and uninterpretable, undermining the goal of extracting explainable trends. We address this problem by augmenting statistical models with interpretable natural language parameters. The figure below depicts a graphical model representation for the case of time series data. We explain the trends in the observed data [ $x_1$ ... $x_T$] by learning two sets of latent parameters: natural language parameters $phi$ (the learned features) and real-valued parameters $w$ (the time-varying trends). $phi$: the natural language descriptions of $K$ different topics, e.g. "depicts athletes competing". $phi$ is an element of $Sigma$, the universe of all natural language predicates. $w_t$: the frequency of each of the K topics at the time $t$. If our model successfully recovers the underlying trends, then we can visualize $w$ and $phi$ below and see that: 1) more pictures contain medical concepts (red) starting from 2020, and 2) there are recurring (blue) spikes of athletes competing. In the rest of this post, we will explain in detail how to specify and learn models with natural language parameters and showcase the model on several real-world applications. We will cover: A warm-up example of a statistical model with natural language explanations A modeling language for specifying natural language parameters Applications of our framework, which can be used to specify models for time series, clustering, and applications. We will go over: A machine learning application that uses our time series model to monitor trends in LLM usage A business application that uses our clustering model to taxonomize product reviews A cognitive science application that uses our classification model to explain what images are more memorable for humans Thanks to Louise Verkin for helping to typeset the post in Ghost format. Warm-up Example: Logistic Regression with Natural Language Parameters Instead of understanding topic shifts across the entire time window of 2016-2024, let's first study a much simpler question: what images are more likely to appear after 2020? The usual way to approach this problem is to, 1. brainstorm some features, 2. extract the real-valued features from each image, and 3. run a logistic regression model on these features to predict the target $Y$ =1 if the image appears after 2020, $Y$ =0 otherwise. More concretely: Step 1: Propose different...

The Nonlinear Library
LW - Glitch Token Catalog - (Almost) a Full Clear by Lao Mein

The Nonlinear Library

Play Episode Listen Later Sep 22, 2024 170:10


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Glitch Token Catalog - (Almost) a Full Clear, published by Lao Mein on September 22, 2024 on LessWrong. This is a collection of every unidentified GPT2 glitch token listed in the third glitch token archaeology post. I was able to find the source of every single one, except for "?????-" and "?????-?????-"[1]. Please tell me if I missed one, or you've discovered one and don't understand where it came from. This isn't meant to be a well-written analysis, just a quick repository of my glitch-hunting observations. I plan on writing up and categorizing all of these in greater detail in future posts. The first of which is here. I used OpenWebText, a recreation of GPT2's training data, for all experiments in this post. I tokenized every .gz file in the archive and made a boolean Numpy array of each tokens that was present at least once. This allowed me to quickly identify infrequent tokens in the dataset and pull up the textual context with regular expressions. If there was an issue with overlap, I used a tokenizer-based extraction instead. All data/code available upon request. The leftmost column is token id, the middle is the token string, and the right column is # of files the token was present in (out of 20610). GPT2 has 50256 total tokens. GPT2 tokens with the lowest frequency in OpenWebText 30898 'embedreportprint' 0 33434 ' 士' 0 43453 ' SolidGoldMagikarp' 0 1849 'xa0' 0 47654 ' xa0xa0' 0 50009 ' strutConnector' 0 36173 ' RandomRedditor' 0 214 'x1a' 0 42424 'DragonMagazine' 0 180 ' ' 0 187 ' ' 0 186 ' ' 0 30213 ' externalToEVAOnly' 0 30212 ' externalToEVA' 0 30211 ' guiIcon' 0 185 ' ' 0 30210 ' guiActiveUnfocused' 0 30209 ' unfocusedRange' 0 184 ' ' 0 30202 ' guiName' 0 183 ' ' 0 30905 'rawdownload' 0 39906 'EStream' 0 33454 '龍喚士' 0 42586 ' srfN' 0 25992 ' 裏覚醒' 0 43065 ' srfAttach' 0 11504 ' xa0 xa0' 0 39172 'xa0xa0xa0xa0xa0xa0xa0xa0xa0xa0xa0xa0xa0xa0xa0xa0' 0 40240 'oreAndOnline' 0 40241 'InstoreAndOnline' 0 33477 'xa0xa0xa0' 0 36174 ' RandomRedditorWithNo' 0 37574 'StreamerBot' 0 46600 ' Adinida' 0 182 ' ' 0 29372 ' guiActiveUn' 0 43177 'EStreamFrame' 0 22686 ' xa0 xa0 xa0 xa0' 0 23282 ' davidjl' 0 47571 ' DevOnline' 0 39752 'quickShip' 0 44320 'nxa0' 0 8828 'xa0xa0xa0xa0' 0 39820 '龍 ' 0 39821 '龍契士' 0 28666 'PsyNetMessage' 0 35207 ' attRot' 0 181 ' ' 0 18472 ' guiActive' 0 179 ' ' 0 17811 'xa0xa0xa0xa0xa0xa0xa0xa0' 0 20174 ' 裏 ' 0 212 'x18' 0 211 'x17' 0 210 'x16' 0 209 'x15' 0 208 'x14' 0 31666 '?????-?????-' 0 207 'x13' 0 206 'x12' 0 213 'x19' 0 205 'x11' 0 203 'x0f' 0 202 'x0e' 0 31957 'cffffcc' 0 200 'x0c' 0 199 'x0b' 0 197 't' 0 196 'x08' 0 195 'x07' 0 194 'x06' 0 193 'x05' 0 204 'x10' 0 45545 ' サーティワン' 0 201 'r' 0 216 'x1c' 0 37842 ' partName' 0 45706 ' xa0 xa0 xa0 xa0 xa0 xa0 xa0 xa0' 0 124 ' ' 0 125 ' ' 0 178 ' ' 0 41380 'natureconservancy' 0 41383 'assetsadobe' 0 177 ' ' 0 215 'x1b' 0 41551 'Downloadha' 0 4603 'xa0xa0' 0 42202 'GoldMagikarp' 0 42089 ' TheNitrome' 0 217 'x1d' 0 218 'x1e' 0 42090 ' TheNitromeFan' 0 192 'x04' 0 191 'x03' 0 219 'x1f' 0 189 'x01' 0 45544 ' サーティ' 0 5624 ' xa0' 0 190 'x02' 0 40242 'BuyableInstoreAndOnline' 1 36935 ' dstg' 1 36940 ' istg' 1 45003 ' SetTextColor' 1 30897 'reportprint' 1 39757 'channelAvailability' 1 39756 'inventoryQuantity' 1 39755 'isSpecialOrderable' 1 39811 'soDeliveryDate' 1 39753 'quickShipAvailable' 1 39714 'isSpecial' 1 47198 'ItemTracker' 1 17900 ' Dragonbound' 1 45392 'dayName' 1 37579 'TPPStreamerBot' 1 31573 'ActionCode' 2 25193 'NetMessage' 2 39749 'DeliveryDate' 2 30208 ' externalTo' 2 43569 'ÍÍ' 2 34027 ' actionGroup' 2 34504 ' 裏 ' 2 39446 ' SetFontSize' 2 30899 'cloneembedreportprint' 2 32047 ' "$:/' 3 39803 'soType' 3 39177 'ItemThumbnailImage' 3 49781 'EngineDebug' 3 25658 '?????-' 3 33813 '=~=~' 3 48396 'ÛÛ' 3 34206 ...

The Nonlinear Library
LW - Investigating an insurance-for-AI startup by L Rudolf L

The Nonlinear Library

Play Episode Listen Later Sep 21, 2024 26:00


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Investigating an insurance-for-AI startup, published by L Rudolf L on September 21, 2024 on LessWrong. We (Flo & Rudolf) spent a month fleshing out the idea of an insurance-for-AI company. We talked to 15 people in the insurance industry, and did 20 customer interviews. We decided not to continue, but we think it's still a very promising idea and that maybe someone else should do this. This post describes our findings. The idea Theory of change To reduce AI risks, it would be good if we understood risks well, and if some organisation existed that could incentivise the use of safer AI practices. An insurance company that sells insurance policies for AI use cases has a financial incentive to understand concrete AI risks & harms well, because this feeds into its pricing. This company would also be incentivised to encourage companies to adopt safer AI practices, and could incentivise this by offering lower premiums in return. Like many cyber-insurance companies, it could also provide more general advice & consulting on AI-related risk reduction. Concrete path TL;DR: Currently, professionals (e.g. lawyers) have professional indemnity (PI) insurance. Right now, most AI tools involve the human being in the loop. But eventually, the AI will do the work end-to-end, and then the AI will be the one whose mistakes need to be insured. Currently, this insurance does not exist. We would start with law, but then expand to all other forms of professional indemnity insurance (i.e. insurance against harms caused by a professional's mistakes or malpractice in their work). Frontier labs are not good customers for insurance, since their size means they mostly do not need external insurance, and have a big information advantage in understanding the risk. Instead, we would target companies using LLMs (e.g. large companies that use specific potentially-risky AI workflows internally), or companies building LLM products for a specific industry. We focused on the latter, since startups are easier to sell to. Specifically, we wanted a case where: LLMs were being used in a high-stakes industry like medicine or law there were startups building LLM products in this industry there is some reason why the AI might cause legal liability, for example: the LLM tools are sufficiently automating the work that the liability is plausibly on them rather than the humans AI exceptions in existing insurance policies exist (or will soon exist) The best example we found was legal LLM tools. Law involves important decisions and large amounts of money, and lawyers can be found liable in legal malpractice lawsuits. LLMs are close to being able to do much legal work end-to-end; in particular, if the work is not checked by a human before being shipped, it is uncertain if existing professional indemnity (PI) insurance applies. People who work in law and law tech are also, naturally, very liability-aware. Therefore, our plan was: Become a managing general agent (MGA), a type of insurance company that does not pay claims out of its own capital (but instead finds a reinsurer to agree to pay them, and earns a cut of the premiums). Design PI policies for AI legal work, and sell these policies to legal AI startups (to help them sell to their law firm customers), or directly to law firms buying end-to-end legal AI tools. As more and more legal work is done end-to-end by AI, more and more of the legal PI insurance market is AI insurance policies. As AI advances and AI insurance issues become relevant in other industries, expand to those industries (e.g. medicine, finance, etc.). Eventually, most of the world's professional indemnity insurance market (on the order of $10B-100B/year) has switched from insuring against human mistakes to insuring against AI mistakes. Along the way, provide consulting services for countless business...

The Nonlinear Library
LW - Work with me on agent foundations: independent fellowship by Alex Altair

The Nonlinear Library

Play Episode Listen Later Sep 21, 2024 6:20


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Work with me on agent foundations: independent fellowship, published by Alex Altair on September 21, 2024 on LessWrong. Summary: I am an independent researcher in agent foundations, and I've recently received an LTFF grant to fund someone to do research with me. This is a rolling application; I'll close it whenever I'm no longer interested in taking another person. If you're not familiar with agent foundations, you can read about my views in this post. What the role might be like This role is extremely flexible. Depending on who you are, it could end up resembling an internship, a research assistant position, a postdoc or even as a mentor/advisor to me. Below, I've listed out the parameters of the fellowship that I am using as a baseline of what it could be. All of these parameters are negotiable! $25 per hour. This is not a lot for people who live in the SF Bay area, or who are used to industry salaries, but it looks to me like this is comparable to a typical grad student salary. 20 hours per week. I'd like this fellowship to be one of your main projects, and I think it can take quite a lot of "deep work" focus before one can make progress on the research problems.[1] 3 months, with a decent chance of extension. During my AI safety camp project, it took about 6 weeks to get people up to speed on all the parts of the agent structure problem. Ideally I could find someone for this role who is already closer to caught up (though I don't necessarily anticipate that). I'm thinking of this fellowship as something like an extended work-trial for potentially working together longer-term. That said, I think we should at least aim to get results by the end of it. Whether I'll decide to invite you to continue working with me afterwards depends on how our collaboration went (both technically and socially), how many other people I'm collaborating with at that time, and whether I think I have enough funds to support it. Remote, but I'm happy to meet in person. Since I'm independent, I don't have anything like an office for you to make use of. But if you happen to be in the SF Bay area, I'd be more than happy to have our meetings in person. I wake up early, so US eastern and European time zones work well for me (and other time zones too). Meeting 2-5 times per week. Especially in the beginning, I'd like to do a pretty large amount of syncing up. It can take a long time to convey all the aspects of the research problems. I also find that real-time meetings regularly generate new ideas. That said, some people find meetings worse for their productivity, and so I'll be responsive to your particular work style. An end-of-term write-up. It seems to take longer than three months to get results in the types of questions I'm interested in, but I think it's good practice to commit to producing a write-up of how the fellowship goes. If it goes especially well, we could produce a paper. What this role ends up looking like mostly depends on your experience level relative to mine. Though I now do research, I haven't gone through the typical academic path. I'm in my mid-thirties and have a proportional amount of life and career experience, but in terms of mathematics, I consider myself the equivalent of a second year grad student. So I'm comfortable leading this project and am confident in my research taste, but you might know more math than me. The research problems Like all researchers in agent foundations, I find it quite difficult to concisely communicate what my research is about. Probably the best way to tell if you will be interested in my research problems is to read other things I've written, and then have a conversation with me about it. All my research is purely mathematical,[2] rather than experimental or empirical. None of it involves machine learning per se, but the theorems should ...

The Nonlinear Library
LW - Applications of Chaos: Saying No (with Hastings Greer) by Elizabeth

The Nonlinear Library

Play Episode Listen Later Sep 21, 2024 3:39


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Applications of Chaos: Saying No (with Hastings Greer), published by Elizabeth on September 21, 2024 on LessWrong. Previously Alex Altair and I published a post on the applications of chaos theory, which found a few successes but mostly overhyped dead ends. Luckily the comments came through, providing me with an entirely different type of application: knowing you can't, and explaining to your boss that you can't. Knowing you can't Calling a system chaotic rules out many solutions and tools, which can save you time and money in dead ends not traveled. I knew this, but also knew that you could never be 100% certain a physical system was chaotic, as opposed to misunderstood. However, you can know the equations behind proposed solutions, and trust that reality is unlikely to be simpler[1] than the idealized math. This means that if the equations necessary for your proposed solution could be used to solve the 3-body problem, you don't have a solution. [[1] I'm hedging a little because sometimes reality's complications make the math harder but the ultimate solution easier. E.g. friction makes movement harder to predict but gives you terminal velocity.] I had a great conversation with trebuchet and math enthusiast Hastings Greer about how this dynamic plays out with trebuchets. Transcript Note that this was recorded in Skype with standard headphones, so the recording leaves something to be desired. I think it's worth it for the trebuchet software visuals starting at 07:00 My favorite parts: If a trebuchet requires you to solve the double pendulum problem (a classic example of a chaotic system) in order to aim, it is not a competition-winning trebuchet. Trebuchet design was solved 15-20 years ago; it's all implementation details now. This did not require modern levels of tech, just modern nerds with free time. The winning design was used by the Syrians during Arab Spring, which everyone involved feels ambivalent about. The national pumpkin throwing competition has been snuffed out by insurance issues, but local competitions remain. Learning about trebuchet modeling software. Explaining you can't One reason to doubt chaos theory's usefulness is that we don't need fancy theories to tell us something is impossible. Impossibility tends to make itself obvious. But some people refuse to accept an impossibility, and some of those people are managers. Might those people accept "it's impossible because of chaos theory" where they wouldn't accept "it's impossible because look at it"? As a test of this hypothesis, I made a Twitter poll asking engineers-as-in-builds-things if they had tried to explain a project's impossibility to chaos, and if it had worked. The final results were: 36 respondents who were engineers of the relevant type This is probably an overestimate. One respondee replied later that he selected this option incorrectly, and I suspect that was a common mistake. I haven't attempted to correct for it as the exact percentage is not a crux for me. 6 engineers who'd used chaos theory to explain to their boss why something was impossible. 5 engineers who'd tried this explanation and succeeded. 1 engineer who tried this explanation and failed. 5/36 is by no means common, but it's not zero either, and it seems like it usually works. My guess is that usage is concentrated in a few subfields, making chaos even more useful than it looks. My sample size isn't high enough to trust the specific percentages, but as an existence proof I'm quite satisfied. Conclusion Chaos provides value both by telling certain engineers where not to look for solutions to their problems, and by getting their bosses off their back about it. That's a significant value add, but short of what I was hoping for when I started looking into Chaos. Thanks for listening. To help us out with The Nonlinear Library ...

The Nonlinear Library
LW - Interested in Cognitive Bootcamp? by Raemon

The Nonlinear Library

Play Episode Listen Later Sep 20, 2024 2:05


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Interested in Cognitive Bootcamp?, published by Raemon on September 20, 2024 on LessWrong. I'm running more 4-day "Cognitive Bootcamps" over the next couple months (during Lighthaven Eternal September season). DM me if you're potentially interested (either as an individual, or as a team). The workshop is most valuable to people who: control their decisionmaking process (i.e. you decide what projects you or a team work on, rather than working at a day-job on someone else's vision) are either a) confused about planmaking / have a vague sense that they aren't as strategically ambitious as they could be. and/or, b) are at a place where it's natural to spend a few days thinking big-picture thoughts before deciding on their next project. There's a secondary[1] focus on "practice solving confusing problems", which IMO is time well spent, but requires more followup practice to pay off. I wrote about the previous workshop here. Participants said on average they'd have been willing to pay $850 for it, and would have paid $5000 for the ideal, perfectly-tailored-for-them version. My plan is to charge $500/person for the next workshop, and then $1000 for the next one. I'm most excited to run this for teams, who can develop a shared skillset and accompanying culture. I plan to tailor the workshops for the needs of whichever people show up. The dates are not scheduled yet (depends somewhat on when a critical mass of participants are available). DM me if you are interested. The skills being taught will be similar to the sort of thing listed in Skills from a year of Purposeful Rationality Practice and the Feedbackloop-first Rationality sequence. My default curriculum is aiming to teach several interrelated related skills you can practice over four days, that build into a coherent metaskill of "ambitious planning, at multiple timescales." 1. ^ I started this project oriented around "find better feedbackloops for solving confusing problems", and later decided that planmaking was the highest leverage part of the skill tree to focus on. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library
EA - The Best Argument is not a Simple English Yud Essay by Jonathan Bostock

The Nonlinear Library

Play Episode Listen Later Sep 20, 2024 6:35


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Best Argument is not a Simple English Yud Essay, published by Jonathan Bostock on September 20, 2024 on The Effective Altruism Forum. I was encouraged to post this here, but I don't yet have enough EA forum karma to crosspost directly! Epistemic status: these are my own opinions on AI risk communication, based primarily on my own instincts on the subject and discussions with people less involved with rationality than myself. Communication is highly subjective and I have not rigorously A/B tested messaging. I am even less confident in the quality of my responses than in the correctness of my critique. If they turn out to be true, these thoughts can probably be applied to all sorts of communication beyond AI risk. Lots of work has gone into trying to explain AI risk to laypersons. Overall, I think it's been great, but there's a particular trap that I've seen people fall into a few times. I'd summarize it as simplifying and shortening the text of an argument without enough thought for the information content. It comes in three forms. One is forgetting to adapt concepts for someone with a far inferential distance; another is forgetting to filter for the important information; the third is rewording an argument so much you fail to sound like a human being at all. I'm going to critique three examples which I think typify these: Failure to Adapt Concepts I got this from the summaries of AI risk arguments written by Katja Grace and Nathan Young here. I'm making the assumption that these summaries are supposed to be accessible to laypersons, since most of them seem written that way. This one stands out as not having been optimized on the concept level. This argument was below-aveage effectiveness when tested. I expect most people's reaction to point 2 would be "I understand all those words individually, but not together". It's a huge dump of conceptual information all at once which successfully points to the concept in the mind of someone who already understands it, but is unlikely to introduce that concept to someone's mind. Here's an attempt to do better: 1. So far, humans have mostly developed technology by understanding the systems which the technology depends on. 2. AI systems developed today are instead created by machine learning. This means that the computer learns to produce certain desired outputs, but humans do not tell the system how it should produce the outputs. We often have no idea how or why an AI behaves in the way that it does. 3. Since we don't understand how or why an AI works a certain way, it could easily behave in unpredictable and unwanted ways. 4. If the AI is powerful, then the consequences of unwanted behaviour could be catastrophic. And here's Claude's just for fun: 1. Up until now, humans have created new technologies by understanding how they work. 2. The AI systems made in 2024 are different. Instead of being carefully built piece by piece, they're created by repeatedly tweaking random systems until they do what we want. This means the people who make these AIs don't fully understand how they work on the inside. 3. When we use systems that we don't fully understand, we're more likely to run into unexpected problems or side effects. 4. If these not-fully-understood AI systems become very powerful, any unexpected problems could potentially be really big and harmful. I think it gets points 1 and 3 better than me, but 2 and 4 worse. Either way, I think we can improve upon the summary. Failure to Filter Information When you condense an argument down, you make it shorter. This is obvious. What is not always as obvious is that this means you have to throw out information to make the core point clearer. Sometimes the information that gets kept is distracting. Here's an example from a poster a friend of mine made for Pause AI: When I showed this to ...

The Nonlinear Library
LW - o1-preview is pretty good at doing ML on an unknown dataset by Håvard Tveit Ihle

The Nonlinear Library

Play Episode Listen Later Sep 20, 2024 3:14


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: o1-preview is pretty good at doing ML on an unknown dataset, published by Håvard Tveit Ihle on September 20, 2024 on LessWrong. Previous post: How good are LLMs at doing ML on an unknown dataset? A while back I ran some evaluation tests on GPT4o, Claude Sonnet 3.5 and Gemini advanced to see how good they were at doing machine learning on a completely novel, and somewhat unusual dataset. The data was basically 512 points in the 2D plane, and some of the points make up a shape, and the goal is to classify the data according to what shape the points make up. None of the models did better than chance on the original (hard) dataset, while they did somewhat better on a much easier version I made afterwards. With the release of o1-preview, I wanted to quickly run the same test on o1, just to see how well it did. In summary, it basically solved the hard version of my previous challenge, achieving 77% accuracy on the test set on its fourth submission (this increases to 91% if I run it for 250 instead of 50 epochs), which is really impressive to me. Here is the full conversation with ChatGPT o1-preview In general o1-preview seems like a big step change in its ability to reliably do hard tasks like this without any advanced scaffolding or prompting to make it work. Detailed discussion of results The architecture that o1 went for in the first round is essentially the same that Sonnet 3.5 and gemini went for, a pointnet inspired model which extracts features from each point independently. While it managed to do slightly better than chance on the training set, it did not do well on the test set. For round two, it went for the approach (which also Sonnet 3.5 came up with) of binning the points in 2D into an image, and then using a regular 2D convnet to classify the shapes. This worked somewhat on the first try. It completely overfit the training data, but got to an accuracy of 56% on the test data. For round three, it understood that it needed to add data augmentations in order to generalize better, and it implemented scaling, translations and rotations of the data. It also switched to a slightly modified resnet18 architecture (a roughly 10x larger model). However, it made a bug when converting to PIL image (and back to torch.tensor), which resulted in an error. For round four, o1 fixed the error and has a basically working solution, achieving an accuracy of 77% (which increases to 91% if we increase the number of epochs from 50 to 250, all still well within the alloted hour of runtime). I consider the problem basically solved at this point, by playing around with smaller variations on this, you can probably get a few more percentage points without any more insights needed. For the last round, it tried the standard approach of using the pretrained weights of resnet18 and freezing almost all the layers, which is an approach that works well on many problems, but did not work well in this case. The accuracy reduced to 41%. I guess these data are just too different from imagenet (which resnet18 is trained on) for this approach to work well. I would not have expected this to work, but I don't hold it that much against o1, as it is a reasonable thing to try. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library
LW - Which LessWrong/Alignment topics would you like to be tutored in? [Poll] by Ruby

The Nonlinear Library

Play Episode Listen Later Sep 19, 2024 2:03


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Which LessWrong/Alignment topics would you like to be tutored in? [Poll], published by Ruby on September 19, 2024 on LessWrong. Would you like to be tutored in applied game theory, natural latents, CFAR-style rationality techniques, "general AI x-risk", Agent Foundations, anthropic s , or some other topics discussed on LessWrong? I'm thinking about prototyping some topic-specific LLM tutor bots, and would like to prioritize topics that multiple people are interested in. Topic-specific LLM tutors would be customized with things like pre-loaded relevant context, helpful system prompts, and more focused testing to ensure they work. Note: I'm interested in topics that are written about on LessWrong, e.g. infra-bayesianism, and not magnetohydrodynamics". I'm going to use the same poll infrastructure that Ben Pace pioneered recently. There is a thread below where you add and vote on topics/domains/areas where you might like tutoring. 1. Karma: upvote/downvote to express enthusiasm about there being tutoring for a topic. 2. Reacts: click on the agree react to indicate you personally would like tutoring on a topic. 3. New Poll Option. Add a new topic for people express interest in being tutored on. For the sake of this poll, I'm more interested in whether you'd like tutoring on a topic or not, separate from the question of whether you think a tutoring bot would be any good. I'll worry about that part. Background I've been playing around with LLMs a lot in the past couple of months and so far my favorite use case is tutoring. LLM-assistance is helpful via multiple routes such as providing background context with less effort than external search/reading, keeping me engaged via interactivity, generating examples, and breaking down complex sections into more digestible pieces. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library
LW - We Don't Know Our Own Values, but Reward Bridges The Is-Ought Gap by johnswentworth

The Nonlinear Library

Play Episode Listen Later Sep 19, 2024 7:41


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We Don't Know Our Own Values, but Reward Bridges The Is-Ought Gap, published by johnswentworth on September 19, 2024 on LessWrong. Background: "Learning" vs "Learning About" Adaptive systems, reinforcement "learners", etc, "learn" in the sense that their behavior adapts to their environment. Bayesian reasoners, human scientists, etc, "learn" in the sense that they have some symbolic representation of the environment, and they update those symbols over time to (hopefully) better match the environment (i.e. make the map better match the territory). These two kinds of "learning" are not synonymous[1]. Adaptive systems "learn" things, but they don't necessarily "learn about" things; they don't necessarily have an internal map of the external territory. (Yes, the active inference folks will bullshit about how any adaptive system must have a map of the territory, but their math does not substantively support that interpretation.) The internal heuristics or behaviors "learned" by an adaptive system are not necessarily "about" any particular external thing, and don't necessarily represent any particular external thing[2]. We Humans Learn About Our Values "I thought I wanted X, but then I tried it and it was pretty meh." "For a long time I pursued Y, but now I think that was more a social script than my own values." "As a teenager, I endorsed the view that Z is the highest objective of human existence. … Yeah, it's a bit embarrassing in hindsight." The ubiquity of these sorts of sentiments is the simplest evidence that we do not typically know our own values[3]. Rather, people often (but not always) have some explicit best guess at their own values, and that guess updates over time - i.e. we can learn about our own values. Note the wording here: we're not just saying that human values are "learned" in the more general sense of reinforcement learning. We're saying that we humans have some internal representation of our own values, a "map" of our values, and we update that map in response to evidence. Look again at the examples at the beginning of this section: "I thought I wanted X, but then I tried it and it was pretty meh." "For a long time I pursued Y, but now I think that was more a social script than my own values." "As a teenager, I endorsed the view that Z is the highest objective of human existence. … Yeah, it's a bit embarrassing in hindsight." Notice that the wording of each example involves beliefs about values. They're not just saying "I used to feel urge X, but now I feel urge Y". They're saying "I thought I wanted X" - a belief about a value! Or "now I think that was more a social script than my own values" - again, a belief about my own values, and how those values relate to my (previous) behavior. Or "I endorsed the view that Z is the highest objective" - an explicit endorsement of a belief about values. That's how we normally, instinctively reason about our own values. And sure, we could reword everything to avoid talking about our beliefs about values - "learning" is more general than "learning about" - but the fact that it makes sense to us to talk about our beliefs about values is strong evidence that something in our heads in fact works like beliefs about values, not just reinforcement-style "learning". Two Puzzles Puzzle 1: Learning About Our Own Values vs The Is-Ought Gap Very roughly speaking, an agent could aim to pursue any values regardless of what the world outside it looks like; "how the external world is" does not tell us "how the external world should be". So when we "learn about" values, where does the evidence about values come from? How do we cross the is-ought gap? Puzzle 2: The Role of Reward/Reinforcement It does seem like humans have some kind of physiological "reward", in a hand-wavy reinforcement-learning-esque sense, which seems to at l...

The Nonlinear Library
LW - Laziness death spirals by PatrickDFarley

The Nonlinear Library

Play Episode Listen Later Sep 19, 2024 13:04


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Laziness death spirals, published by PatrickDFarley on September 19, 2024 on LessWrong. I've claimed that Willpower compounds and that small wins in the present make it easier to get bigger wins in the future. Unfortunately, procrastination and laziness compound, too. You're stressed out for some reason, so you take the evening off for a YouTube binge. You end up staying awake a little later than usual and sleeping poorly. So the next morning you feel especially tired; you snooze a few extra times. In your rushed morning routine you don't have time to prepare for the work meeting as much as you'd planned to. So you have little to contribute during the meeting. You feel bad about your performance. You escape from the bad feelings with a Twitter break. But Twitter is freaking out. Elon Musk said what? Everyone is weighing in. This is going to occupy you intermittently for the rest of the day. And so on. Laziness has a kind of independent momentum to it. When you're having a day like the above, even if you consciously commit to getting back on track, the rut tends to find its way back to you within a couple of hours. Keep this up for a few days and your sleep is utterly messed up, and you walk around in a fog. Keep it up for a week or two and you're fully off your workout routine. In a month or two, you might have noticeably fallen behind on work; you might be absent from your social life; you might've visibly gained fat or lost muscle; you can no longer feel excited about your personal goals because they're behind a pile of mundane tasks you need to catch up on first. And so on. How do we stop the vicious circle? I'm spiraling! I'm spiraling! When you're in a laziness death spiral, it's hard to do anything deliberate. The first and most important step, which does take some willpower but not a lot, is to acknowledge, "I'm in a laziness death spiral today." If you don't acknowledge it, here's what happens: You vaguely notice you you've been wasting time today; you feel a twinge of guilt, so you quickly decide, "I'm going to turn the rest of the day around, starting right now." And does that work? Often it doesn't! Sure, after a small lapse you can just get back on track, but if enough laziness momentum has built up, a momentary reaction doesn't cut it. Deciding things quickly, in response to negative emotions, is exactly how you got into this situation! You're going to turn it around on a whim? You'll have a different whim in the next hour; what then? You need to take a step back and get your mind outside of the problem. Do what you can The next three sections are three different courses of action you can take to get out of a laziness death spiral. One of them is clearly preferable, but I'm writing the alternatives, too. When you're in a low-willpower state, it's often bad to attempt the very best solution - the farther you reach, the harder you can fall. Building a base of "small wins" is the reliable way to repair your willpower. If you start something lofty and then bail on it, you're doing real damage: logging another willpower failure and associating that "very best solution" with failure. Here are the moves: A) Emergency recovery If you're in a laziness spiral and you need to get out of it right now, there are some measures you can take that, while effective, are not ideal. They are unsustainable, promote bad habits, or are just generally unhealthy. But sometimes the need is there: maybe you have a deadline fast approaching (and the deadline itself isn't enough to snap you into action); maybe your friends or family need you to take care of something today; maybe you were in the middle of an awfully lazy day and a once-in-a-lifetime opportunity came up, and you just can't focus enough to act on it. Disclaimer: I believe that in a well planned life, none of these sho...

The Nonlinear Library
LW - [Intuitive self-models] 1. Preliminaries by Steven Byrnes

The Nonlinear Library

Play Episode Listen Later Sep 19, 2024 39:21


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Intuitive self-models] 1. Preliminaries, published by Steven Byrnes on September 19, 2024 on LessWrong. 1.1 Summary & Table of Contents This is the first of a series of eight blog posts, which I'll be serializing over the next month or two. (Or email or DM me if you want to read the whole thing right now.) Here's an overview of the whole series, and then we'll jump right into the first post! 1.1.1 Summary & Table of Contents - for the whole series This is a rather ambitious series of blog posts, in that I'll attempt to explain what's the deal with consciousness, free will, hypnotism, enlightenment, hallucinations, flow states, dissociation, akrasia, delusions, and more. The starting point for this whole journey is very simple: The brain has a predictive (a.k.a. self-supervised) learning algorithm. This algorithm builds generative models (a.k.a. "intuitive models") that can predict incoming data. It turns out that, in order to predict incoming data, the algorithm winds up not only building generative models capturing properties of trucks and shoes and birds, but also building generative models capturing properties of the brain algorithm itself. Those latter models, which I call "intuitive self-models", wind up including ingredients like conscious awareness, deliberate actions, and the sense of applying one's will. That's a simple idea, but exploring its consequences will take us to all kinds of strange places - plenty to fill up an eight-post series! Here's the outline: Post 1 (Preliminaries) gives some background on the brain's predictive learning algorithm, how to think about the "intuitive models" built by that algorithm, how intuitive self-models come about, and the relation of this whole series to Philosophy Of Mind. Post 2 ( Awareness ) proposes that our intuitive self-models include an ingredient called "conscious awareness", and that this ingredient is built by the predictive learning algorithm to represent a serial aspect of cortex computation. I'll discuss ways in which this model is veridical (faithful to the algorithmic phenomenon that it's modeling), and ways that it isn't. I'll also talk about how intentions and decisions fit into that framework. Post 3 ( The Homunculus ) focuses more specifically on the intuitive self-model that almost everyone reading this post is experiencing right now (as opposed to the other possibilities covered later in the series), which I call the Conventional Intuitive Self-Model. In particular, I propose that a key player in that model is a certain entity that's conceptualized as actively causing acts of free will. Following Dennett, I call this entity "the homunculus", and relate that to intuitions around free will and sense-of-self. Post 4 ( Trance ) builds a framework to systematize the various types of trance, from everyday "flow states", to intense possession rituals with amnesia. I try to explain why these states have the properties they do, and to reverse-engineer the various tricks that people use to induce trance in practice. Post 5 ( Dissociative Identity Disorder ) (a.k.a. "multiple personality disorder") is a brief opinionated tour of this controversial psychiatric diagnosis. Is it real? Is it iatrogenic? Why is it related to borderline personality disorder (BPD) and trauma? What do we make of the wild claim that each "alter" can't remember the lives of the other "alters"? Post 6 ( Awakening / Enlightenment / PNSE ) is a type of intuitive self-model, typically accessed via extensive meditation practice. It's quite different from the conventional intuitive self-model. I offer a hypothesis about what exactly the difference is, and why that difference has the various downstream effects that it has. Post 7 (Hearing Voices, and Other Hallucinations) talks about factors contributing to hallucinations - although I argue ...

The Nonlinear Library
EA - EA Organization Updates: September 2024 by Toby Tremlett

The Nonlinear Library

Play Episode Listen Later Sep 19, 2024 9:56


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Organization Updates: September 2024, published by Toby Tremlett on September 19, 2024 on The Effective Altruism Forum. If you would like to see EA Organization Updates as soon as they come out, consider subscribing to this tag. Some of the opportunities and job listings we feature in this update have (very) pressing deadlines (see AI Alignment Teaching Fellow opportunities at BlueDot Impact, September 22, and Institutional Foodservice Fellow at the Good Food Institute, September 18). You can see previous updates on the "EA Organization Updates (monthly series)" topic page, or in our repository of past newsletters. Notice that there's also an "org update" tag, where you can find more news and updates that are not part of this consolidated series. These monthly posts originated as the "Updates" section of the monthly EA Newsletter. Organizations submit their own updates, which we edit for clarity. (If you'd like to share your updates and jobs via this series, please apply here.) Opportunities and jobs Opportunities Consider also checking opportunities listed on the EA Opportunity Board and the Opportunities to Take Action tag. ALLFED published a new database containing numerous research projects that prospective volunteers can assist with. Explore the database and apply here. Apply to the upcoming AI Safety Fundamentals: Alignment course by October 6 to learn about the risks from AI and how you can contribute to the field. The Animal Advocacy Careers Introduction to Animal Advocacy Course has been revamped. The course is for those wishing to kickstart a career in animal advocacy. Giv Effektivt (DK) needs ~110 EU citizens to become members before the new year in order to offer tax deductions of around 450.000DKK ($66.000) for 2024-25 donations. Become a member now for 50DKK ($7). An existing donor will give 100DKK for each new member until the organization reaches 300 members. Anima International's Animal Advocacy Training Center released a new online course - Fundraising Essentials. It's a free, self-paced resource with over two hours of video content for people new to the subject. Job listings Consider also exploring jobs listed on the Job listing (open) tag. For even more roles, check the 80,000 Hours Job Board. BlueDot Impact AI Alignment Teaching Fellow (Remote, £4.9K-£9.6K, apply by September 22nd) Centre for Effective Altruism Head of Operations (Remote, £107.4K / $179.9K, apply by October 7th) Cooperative AI Foundation Communications Officer (Remote, £35K-£40K, apply by September 29th) GiveWell Senior Researcher (Remote, $200K-$220.6K) Giving What We Can Global CEO (Remote, $130K+, apply by September 30th) Open Philanthropy Operations Coordinator/Associate (San Francisco, Washington, DC, $99.6K-$122.6K) If you're interested in working at Open Philanthropy but don't see an open role that matches your skillset, express your interest. Epoch AI Question Writer, Math Benchmark (Contractor Position) (Remote, $2K monthly + $100-$1K performance-based bonus) Senior Researcher, ML Distributed Systems (Remote, $150K-$180K) The Good Food Institute Managing Director, GFI India (Hybrid (Mumbai, Delhi, Hyderabad, or Bangalore), ₹4.5M, apply by October 2nd) Institutional Foodservice Fellow (Independent Contractor) (Remote in US, $3.6K biweekly, apply by September 18th) Organization updates The organization updates are in alphabetical order (0-A-Z). 80,000 Hours There is one month left to win $5,000 career grants by referring your friends or colleagues to 80,000 Hours' free career advising. Also, the organization released a blog post about the recent updates to their AI-related content, as well as a post about pandemic preparedness in relation to mpox and H5N1. On the 80,000 Hours Podcast, Rob interviewed: Nick Joseph on whether Anthropic's AI safety policy is up to the task...

The Nonlinear Library
EA - Five Years of Animal Advocacy Careers: Our Journey to impact, Lessons Learned, and What's Next by lauren mee

The Nonlinear Library

Play Episode Listen Later Sep 19, 2024 28:13


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Five Years of Animal Advocacy Careers: Our Journey to impact, Lessons Learned, and What's Next, published by lauren mee on September 19, 2024 on The Effective Altruism Forum. This post is mostly about our key learnings, impact made and future plans Thanks to my team for their help in both creating this post and unwavering commitment to driving forward AAC's ambitious plans for animals, in particular Ana Barreiro, Nayan and Engin for their contributions and feedback on this post. TL;DR: For five years, Animal Advocacy Careers (AAC) has tried to direct passionate professionals towards high-impact opportunities that have the potential to help animals the most. We've filled 105 roles in leading animal advocacy organisations, supported over 150 organisations with recruitment, and launched 3 core programs our online course, job board, and career advising service. At the same time, we built a community of 27,500+ supporters across social media, Slack, and email. Our efforts also led to 12 10% Pledges and 11 Trial Pledges at Giving What We Can. We cautiously estimate adding $2.5 million worth of counterfactual impact from these donations and placements at a spend of $950,000 We conducted four talent surveys, which, along with our own independent research, continue to form the foundation of our career advising and strategy. Addressing the talent bottlenecks in the effective animal advocacy movement has proven to be far more complex than we first expected. Beyond the initial challenges, we've encountered a range of issues that directly impact our theory of change and our ability to drive meaningful impact - such as the scarcity of job postings and difficulties in the hiring process. In response, we've broadened our focus beyond just non-profit roles to better address these challenges and open up more opportunities for talented individuals to contribute to the movement. Explore more about how AAC is transforming animal advocacy careers and find out more about our exciting plans for the future. (Note: If you would like the full details of the programmes we have stopped, started, scaled and pivoted and a full programme evaluation our latest 2023/4 update is here) Overview This piece highlights Animal Advocacy Careers' accomplishments, mistakes, and changes since its establishment in 2019. We discuss AAC's future plans as well as potential constraints to our impact. Our vision is to have an animal advocacy movement of international talent density with mission-aligned advocates in critical positions in society, accelerating freedom for animals. Background AAC was founded in July 2019 through Charity Entrepreneurship's incubation program. Its goal is to accelerate the impact of existing organisations by solving their major talent bottlenecks, attracting top talent to the movement, matching them to the most impactful opportunities and empowering professionals to make a real impact. To effectively match top talent with the most impactful opportunities, AAC first had to conduct research to gain a deeper understanding of the movement's challenges and overall talent landscape. We needed to identify the market size, determine which skills and roles were most in demand and hardest to fill, and uncover the root causes behind these talent bottlenecks. This research forms the foundation of our work, allowing us to address the movement's needs in a more informed and strategic way. In addition to conducting research, AAC launched several experimental programs aimed at addressing talent bottlenecks . These programs included management and leadership training, an online course, a job board, career advising, fundraising work placements, headhunting and recruitment efforts, organisational recruitment training, a candidate database, and effective giving for animals. Through trialing these programmes...

The Nonlinear Library
AF - The Obliqueness Thesis by Jessica Taylor

The Nonlinear Library

Play Episode Listen Later Sep 19, 2024 30:04


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Obliqueness Thesis, published by Jessica Taylor on September 19, 2024 on The AI Alignment Forum. In my Xenosystems review, I discussed the Orthogonality Thesis, concluding that it was a bad metaphor. It's a long post, though, and the comments on orthogonality build on other Xenosystems content. Therefore, I think it may be helpful to present a more concentrated discussion on Orthogonality, contrasting Orthogonality with my own view, without introducing dependencies on Land's views. (Land gets credit for inspiring many of these thoughts, of course, but I'm presenting my views as my own here.) First, let's define the Orthogonality Thesis. Quoting Superintelligence for Bostrom's formulation: Intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal. To me, the main ambiguity about what this is saying is the "could in principle" part; maybe, for any level of intelligence and any final goal, there exists (in the mathematical sense) an agent combining those, but some combinations are much more natural and statistically likely than others. Let's consider Yudkowsky's formulations as alternatives. Quoting Arbital: The Orthogonality Thesis asserts that there can exist arbitrarily intelligent agents pursuing any kind of goal. The strong form of the Orthogonality Thesis says that there's no extra difficulty or complication in the existence of an intelligent agent that pursues a goal, above and beyond the computational tractability of that goal. As an example of the computational tractability consideration, sufficiently complex goals may only be well-represented by sufficiently intelligent agents. "Complication" may be reflected in, for example, code complexity; to my mind, the strong form implies that the code complexity of an agent with a given level of intelligence and goals is approximately the code complexity of the intelligence plus the code complexity of the goal specification, plus a constant. Code complexity would influence statistical likelihood for the usual Kolmogorov/Solomonoff reasons, of course. I think, overall, it is more productive to examine Yudkowsky's formulation than Bostrom's, as he has already helpfully factored the thesis into weak and strong forms. Therefore, by criticizing Yudkowsky's formulations, I am less likely to be criticizing a strawman. I will use "Weak Orthogonality" to refer to Yudkowsky's "Orthogonality Thesis" and "Strong Orthogonality" to refer to Yudkowsky's "strong form of the Orthogonality Thesis". Land, alternatively, describes a "diagonal" between intelligence and goals as an alternative to orthogonality, but I don't see a specific formulation of a "Diagonality Thesis" on his part. Here's a possible formulation: Diagonality Thesis: Final goals tend to converge to a point as intelligence increases. The main criticism of this thesis is that formulations of ideal agency, in the form of Bayesianism and VNM utility, leave open free parameters, e.g. priors over un-testable propositions, and the utility function. Since I expect few readers to accept the Diagonality Thesis, I will not concentrate on criticizing it. What about my own view? I like Tsvi's naming of it as an "obliqueness thesis". Obliqueness Thesis: The Diagonality Thesis and the Strong Orthogonality Thesis are false. Agents do not tend to factorize into an Orthogonal value-like component and a Diagonal belief-like component; rather, there are Oblique components that do not factorize neatly. (Here, by Orthogonal I mean basically independent of intelligence, and by Diagonal I mean converging to a point in the limit of intelligence.) While I will address Yudkowsky's arguments for the Orthogonality Thesis, I think arguing directly for my view first will be more helpful. In general, it seems ...

The Nonlinear Library
EA - What Would You Ask The Archbishop of Canterbury? by JDBauman

The Nonlinear Library

Play Episode Listen Later Sep 19, 2024 0:43


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What Would You Ask The Archbishop of Canterbury?, published by JDBauman on September 19, 2024 on The Effective Altruism Forum. The head of the Church of England is the second most influential Christian alive today. [1] The current Archbishop, Justin Welby, is speaking at the EA-adjacent Christians for Impact conference with Rory Stewart about faith and poverty. What should we ask Archbishop Justin in the Q&A? Feel free to submit anonymous thoughts here. 1. ^ Source: ChatGPT Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library
LW - AI #82: The Governor Ponders by Zvi

The Nonlinear Library

Play Episode Listen Later Sep 19, 2024 43:47


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #82: The Governor Ponders, published by Zvi on September 19, 2024 on LessWrong. The big news of the week was of course OpenAI releasing their new model o1. If you read one post this week, read that one. Everything else is a relative sideshow. Meanwhile, we await Newsom's decision on SB 1047. The smart money was always that Gavin Newsom would make us wait before offering his verdict on SB 1047. It's a big decision. Don't rush him. In the meantime, what hints he has offered suggest he's buying into some of the anti-1047 talking points. I'm offering a letter to him here based on his comments, if you have any way to help convince him now would be the time to use that. But mostly, it's up to him now. Table of Contents 1. Introduction. 2. Table of Contents. 3. Language Models Offer Mundane Utility. Apply for unemployment. 4. Language Models Don't Offer Mundane Utility. How to avoid the blame. 5. Deepfaketown and Botpocalypse Soon. A social network of you plus bots. 6. They Took Our Jobs. Not much impact yet, but software jobs still hard to find. 7. Get Involved. Lighthaven Eternal September, individual rooms for rent. 8. Introducing. Automated scientific literature review. 9. In Other AI News. OpenAI creates independent board to oversee safety. 10. Quiet Speculations. Who is preparing for the upside? Or appreciating it now? 11. Intelligent Design. Intelligence. It's a real thing. 12. SB 1047: The Governor Ponders. They got to him, but did they get to him enough? 13. Letter to Newsom. A final summary, based on Newsom's recent comments. 14. The Quest for Sane Regulations. How should we update based on o1? 15. Rhetorical Innovation. The warnings will continue, whether or not anyone listens. 16. Claude Writes Short Stories. It is pondering what you might expect it to ponder. 17. Questions of Sentience. Creating such things should not be taken lightly. 18. People Are Worried About AI Killing Everyone. The endgame is what matters. 19. The Lighter Side. You can never be sure. Language Models Offer Mundane Utility Arbitrate your Nevada unemployment benefits appeal, using Gemini. This should solve the backlog of 10k+ cases, and also I expect higher accuracy than the existing method, at least until we see attempts to game the system. Then it gets fun. That's also job retraining. o1 usage limit raised to 50 messages per day for o1-mini, 50 per week for o1-preview. o1 can do multiplication reliably up to about 46 digits, andabout 50% accurately up through about 810, a huge leap from gpt-4o, although Colin Fraser reports 4o can be made better tat this than one would expect. o1 is much better than 4o at evaluating medical insurance claims, and determining whether requests for care should be approved, especially in terms of executing existing guidelines, and automating administrative tasks. It seems like a clear step change in usefulness in practice. The claim is that being sassy and juicy and bitchy improves Claude Instant numerical reasoning. What I actually see here is that it breaks Claude Instant out of trick questions. Where Claude would previously fall into a trap, you have it fall back on what is effectively 'common sense,' and it starts getting actually easy questions right. Language Models Don't Offer Mundane Utility A key advantage of using an AI is that you can no longer be blamed for an outcome out of your control. However, humans often demand manual mode be available to them, allowing humans to override the AI, even when it doesn't make any practical sense to offer this. And then, if the human can in theory switch to manual mode and override the AI, blame to the human returns, even when the human exerting that control was clearly impractical in context. The top example here is self-driving cars, and blame for car crashes. The results suggest that the human thirst for ill...

The Nonlinear Library
LW - The case for a negative alignment tax by Cameron Berg

The Nonlinear Library

Play Episode Listen Later Sep 18, 2024 14:19


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The case for a negative alignment tax, published by Cameron Berg on September 18, 2024 on LessWrong. TL;DR: Alignment researchers have historically predicted that building safe advanced AI would necessarily incur a significant alignment tax compared to an equally capable but unaligned counterfactual AI. We put forward a case here that this prediction looks increasingly unlikely given the current 'state of the board,' as well as some possibilities for updating alignment strategies accordingly. Introduction We recently found that over one hundred grant-funded alignment researchers generally disagree with statements like: alignment research that has some probability of also advancing capabilities should not be done (~70% somewhat or strongly disagreed) advancing AI capabilities and doing alignment research are mutually exclusive goals (~65% somewhat or strongly disagreed) Notably, this sample also predicted that the distribution would be significantly more skewed in the 'hostile-to-capabilities' direction. See ground truth vs. predicted distributions for these statements These results - as well as recent events and related discussions - caused us to think more about our views on the relationship between capabilities and alignment work given the 'current state of the board,'[1] which ultimately became the content of this post. Though we expect some to disagree with these takes, we have been pleasantly surprised by the positive feedback we've received from discussing these ideas in person and are excited to further stress-test them here. Is a negative alignment tax plausible (or desirable)? Often, capabilities and alignment are framed with reference to the alignment tax, defined as 'the extra cost [practical, developmental, research, etc.] of ensuring that an AI system is aligned, relative to the cost of building an unaligned alternative.' The AF/ LW wiki entry on alignment taxes notably includes the following claim: The best case scenario is No Tax: This means we lose no performance by aligning the system, so there is no reason to deploy an AI that is not aligned, i.e., we might as well align it. The worst case scenario is Max Tax: This means that we lose all performance by aligning the system, so alignment is functionally impossible. We speculate in this post about a different best case scenario: a negative alignment tax - namely, a state of affairs where an AI system is actually rendered more competent/performant/capable by virtue of its alignment properties. Why would this be even better than 'No Tax?' Given the clear existence of a trillion dollar attractor state towards ever-more-powerful AI, we suspect that the most pragmatic and desirable outcome would involve humanity finding a path forward that both (1) eventually satisfies the constraints of this attractor (i.e., is in fact highly capable, gets us AGI, etc.) and (2) does not pose existential risk to humanity. Ignoring the inevitability of (1) seems practically unrealistic as an action plan at this point - and ignoring (2) could be collectively suicidal. Therefore, if the safety properties of such a system were also explicitly contributing to what is rendering it capable - and therefore functionally causes us to navigate away from possible futures where we build systems that are capable but unsafe - then these 'negative alignment tax' properties seem more like a feature than a bug. It is also worth noting here as an empirical datapoint here that virtually all frontier models' alignment properties have rendered them more rather than less capable (e.g., gpt-4 is far more useful and far more aligned than gpt-4-base), which is the opposite of what the 'alignment tax' model would have predicted. This idea is somewhat reminiscent of differential technological development, in which Bostrom suggests "[slowing] the devel...

The Nonlinear Library
LW - Generative ML in chemistry is bottlenecked by synthesis by Abhishaike Mahajan

The Nonlinear Library

Play Episode Listen Later Sep 18, 2024 24:59


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Generative ML in chemistry is bottlenecked by synthesis, published by Abhishaike Mahajan on September 18, 2024 on LessWrong. Introduction Every single time I design a protein - using ML or otherwise - I am confident that it is capable of being manufactured. I simply reach out to Twist Biosciences, have them create a plasmid that encodes for the amino acids that make up my proteins, push that plasmid into a cell, and the cell will pump out the protein I created. Maybe the cell cannot efficiently create the protein. Maybe the protein sucks. Maybe it will fold in weird ways, isn't thermostable, or has some other undesirable characteristic. But the way the protein is created is simple, close-ended, cheap, and almost always possible to do. The same is not true of the rest of chemistry. For now, let's focus purely on small molecules, but this thesis applies even more-so across all of chemistry. Of the 1060 small molecules that are theorized to exist, most are likely extremely challenging to create. Cellular machinery to create arbitrary small molecules doesn't exist like it does for proteins, which are limited by the 20 amino-acid alphabet. While it is fully within the grasp of a team to create millions of de novo proteins, the same is not true for de novo molecules in general (de novo means 'designed from scratch'). Each chemical, for the most part, must go through its custom design process. Because of this gap in 'ability-to-scale' for all of non-protein chemistry, generative models in chemistry are fundamentally bottlenecked by synthesis. This essay will discuss this more in-depth, starting from the ground up of the basics behind small molecules, why synthesis is hard, how the 'hardness' applies to ML, and two potential fixes. As is usually the case in my Argument posts, I'll also offer a steelman to this whole essay. To be clear, this essay will not present a fundamentally new idea. If anything, it's such an obvious point that I'd imagine nothing I'll write here will be new or interesting to people in the field. But I still think it's worth sketching out the argument for those who aren't familiar with it. What is a small molecule anyway? Typically organic compounds with a molecular weight under 900 daltons. While proteins are simply long chains composed of one-of-20 amino acids, small molecules display a higher degree of complexity. Unlike amino acids, which are limited to carbon, hydrogen, nitrogen, and oxygen, small molecules incorporate a much wider range of elements from across the periodic table. Fluorine, phosphorus, bromine, iodine, boron, chlorine, and sulfur have all found their way into FDA-approved drugs. This elemental variety gives small molecules more chemical flexibility but also makes their design and synthesis more complex. Again, while proteins benefit from a universal 'protein synthesizer' in the form of a ribosome, there is no such parallel amongst small molecules! People are certainly trying to make one, but there seems to be little progress. So, how is synthesis done in practice? For now, every atom, bond, and element of a small molecule must be carefully orchestrated through a grossly complicated, trial-and-error reaction process which often has dozens of separate steps. The whole process usually also requires non-chemical parameters, such as adjusting the pH, temperature, and pressure of the surrounding medium in which the intermediate steps are done. And, finally, the process must also be efficient; the synthesis processes must not only achieve the final desired end-product, but must also do so in a way that minimizes cost, time, and required sources. How hard is that to do? Historically, very hard. Consider erythromycin A, a common antibiotic. Erythromycin was isolated in 1949, a natural metabolic byproduct of Streptomyces erythreus, a soil mi...

The Nonlinear Library
EA - Tithing: much more than you wanted to know by Vesa Hautala

The Nonlinear Library

Play Episode Listen Later Sep 18, 2024 34:33


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Tithing: much more than you wanted to know, published by Vesa Hautala on September 18, 2024 on The Effective Altruism Forum. Summary This post explores the practice of tithing (religiously mandated giving of 10% of income to the church or other recipients) among Christians, including: 1. contemporary beliefs and practices (especially in the US) 2. questions about Biblical interpretation 3. wider theological themes related to Christian giving This piece is mainly written for a Christian audience but should be useful to anyone interested in the topic. Some key points US Protestants usually believe tithing should be practiced (about 70% think it's a Biblical commandment) However, only 4% of US Evangelicals donate 10% or more (I didn't find data for all Protestants, but the number is likely similar) yet 38% of Evangelicals believe they are giving one-tenth or more, so they vastly overestimate their giving (again, no data for all Protestants) There are different opinions on who the tithe can be paid to, with a local church being the most common answer The Catholic Church does not teach tithing, Orthodox views are mixed, and the Church of England "challenges" its members to give 10% The Torah has legislation on tithing that seems to command giving 20-30% of agricultural products and animals In my view no New Testament passage sets a fixed percentage to give or provides exact instructions on how to split donations between the church and other charities However, the NT has passages that promote radical generosity[1] and encourage significant giving to those in need, which suggests 10% may be too low an anchoring point for many Christians today Introduction This [Susbstack] post is an abridged version of the article An In-Depth Look at Tithing published on the EA for Christians website. [Note, I've also included some additional content from the full version and some other small changes to this forum post.] Tithing is a contentious subject. Some Christians preach blessings on tithers and curses for non-tithers. Others used to believe tithing is a binding obligation but now vigorously advocate against it. If there is an obligation to give 10% to the church, this greatly affects the giving options of Christians. This post first discusses contemporary views and practices and then the main Bible passages used in relation to tithing. Finally, I will present some wider theological reflections on tithing and giving. A note on definitions: By "tithing" I mean mandatory giving of 10% of income to the church (or possibly other Christian ministries or other types of charity, there are different views about this). Also, for the sake of transparency, I want to state right in the beginning that I don't personally believe in a binding obligation to donate 10% to one's local church. However, even if you disagree, I believe you will find a lot of this post interesting and helpful for deepening your understanding of the arguments for and against tithing. Contemporary views and practices This section is going to be rather US-centric for a few reasons. The US very likely has the largest religious economy in the world and tithing is a part of the US religious landscape. There is more data available about tithing in the US than for example the UK. US Christians also seem to be generally more interested in the tithing question. US Protestants According to a survey by Lifeway Research, 72% of US protestant pastors believe tithing is a biblical commandment that applies today. In a similar survey, 77% of churchgoers said the same. People have different ideas about what "tithe" means, but in the survey of pastors, 73% said it's 10% of a person's income (gross or net). The number of people who actually donate 10% or more is much lower, though. The average giving among US adults who attend worship at leas...

The Nonlinear Library
EA - Match funding opportunity to challenge the legality of Frankenchickens by Gavin Chappell-Bates

The Nonlinear Library

Play Episode Listen Later Sep 18, 2024 7:19


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Match funding opportunity to challenge the legality of Frankenchickens, published by Gavin Chappell-Bates on September 18, 2024 on The Effective Altruism Forum. We have a once-in-a-generation opportunity to improve the lives of millions of chickens raised for food in the UK. In October 2024 The Humane League UK (THL UK) will be heading to the High Court to challenge the legality of fast-growing breeds of chicken - Frankenchickens. We need to raise £55k to fund the hearing. The Jeremy Coller Foundation has pledged to match funding half of the costs up to £28k. We need to raise a further £12.5k to maximise the match funding pot and fully fund the hearing. Please contact me directly should you wish to donate and fight for 1 billion chickens. Frankenchickens ' Frankenchickens' are selectively bred to grow unnaturally big and fast to maximise profits. They are destined to suffer extremely short and painful lives, suffer heart attacks, are often unable to walk and succumb to open sores from laying in their own waste. They grow 400% faster than is natural for their bodies, creating the biggest animal welfare crisis of our time. In the UK alone, there are over 1 billion chickens raised for meat and over 90% are fast growing. THL UK's three-year legal battle In 2020, we saw an opportunity to challenge the legality of Frankenchickens and began building a legal case against the Department for Environment, Food & Rural Affairs (Defra). This culminated in a judicial review taking place at the High Court in May 2023. Getting to this point was a major success in itself as only 5% of cases are granted a full hearing. The judge stated that a full hearing of the facts regarding fast-growing chickens was in the public interest. Represented by Advocates for Animals, we argued that fast-growing chicken breeds, known as Frankenchickens, are illegal under current animal welfare laws, as they suffer as a direct result of their breeding. Our case was bolstered by evidence given by the RSPCA which shows that fast-growing breeds of chicken do suffer, no matter the environment they're raised in. This was despite Defra attempting to block the submission of the RSPCA's evidence. The fight continues In May 2023, the High Court ruled that Defra hadn't behaved unlawfully in their interpretation of the Welfare of Farmed Animals Regulation of 2007. Shortly after the ruling we decided to appeal the court's decision, and continue our three-year legal battle. There is overwhelming scientific consensus that chickens raised for meat suffer due to their breed. Defra itself has offered no evidence to contradict the RSPCA report and even accepted that there are welfare problems with fast-growing breeds of chicken. In October 2023, we found out that our appeal had been granted. In October 2024, we will be back in court, in front of a new judge, to take on Defra to end the cruel use of Frankenchickens in the UK. Our two-day court hearing is due to start on either Tuesday 22nd or Wednesday 23rd October. This is a once-in-a-generation opportunity to force the Government, with one decision from an appeals court judge, to transform one billion innocent lives per year. Our chances of success By virtue of being granted an appeal, our chances for a favourable final outcome have increased significantly. Being granted an appeal means that serious problems with the previous judge's findings have been uncovered, and the judge approving our appeal thinks our case still has merit that needs final and careful deliberation. A positive ruling would mean that the judge found Defra's interpretation of the Welfare of Farmed Animals Regulation of 2007 illegal, and would compel them to create a new policy on fast growing breeds of chicken, one that would invariably lead to farmers being disincentivized or even banned from keeping f...

The Nonlinear Library
EA - Is "superhuman" AI forecasting BS? Some experiments on the "539" bot from the Centre for AI Safety by titotal

The Nonlinear Library

Play Episode Listen Later Sep 18, 2024 22:44


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Is "superhuman" AI forecasting BS? Some experiments on the "539" bot from the Centre for AI Safety, published by titotal on September 18, 2024 on The Effective Altruism Forum. Disclaimer: I am a computational physicist's and this investigation is outside of my immediate area of expertise. Feel free to peruse the experiments and take everything I say with appropriate levels of skepticism. Introduction: The centre for AI safety is a prominent AI safety research group doing technical AI research as well as regulatory activism. It's headed by Dan Hendrycks, who has a PHD in computer science from Berkeley and some notable contributions to AI research. Last week CAIS released a blog post, entitled "superhuman automated forecasting", announcing a forecasting bot developed by a team including Hendrycks, along with a technical report and a website "five thirty nine", where users can try out the bot for themselves. The blog post makes several grandiose claims, claiming to rebut Nate silvers claims that superhuman forecasting is 15-20 years away, and that: Our bot performs better than experienced human forecasters and performs roughly the same as (and sometimes even better than) crowds of experienced forecasters; since crowds are for the most part superhuman, so is FiveThirtyNine. He paired this with a twitter post, declaring: We've created a demo of an AI that can predict the future at a superhuman level (on par with groups of human forecasters working together). Consequently I think AI forecasters will soon automate most prediction markets. The claim is this: Via a chain of prompting, GPT4-o can be harnessed for superhuman prediction. Step 1 is to ask GPT to figure out the most relevant search terms for a forecasting questions, then those are fed into a web search to yield a number of relevant news articles, to extract the information within. The contents of these news articles are then appended to a specially designed prompt which is fed back to GPT-4o. The prompt instructs it to boil down the articles into a list of arguments "for" and "against" the proposition and rate the strength of each, to analyse the results and give an initial numerical estimate, and then do one last sanity check and analysis before yielding a final percentage estimate. How do they know it works? Well, they claim to have run the bot on several metacalculus questions and achieved accuracy greater than both the crowd average and a test using the prompt of a competing model. Importantly, this was a retrodiction: they tried to run questions from last year, while restricting it's access to information since then, and then checked how many of the subsequent results are true. A claim of superhuman forecasting is quite impressive, and should ideally be backed up by impressive evidence. A previous paper trying similar techniques yielding less impressive claims runs to 37 pages, and it demonstrates them doing their best to avoid any potential flaw or pitfall in the process(and I'm still not sure they succeeded). In contrast, the CAIS report is only 4 pages long, lacking pretty much all the relevant information one would need to properly assess the claim. You can read feedback from the twitter replies, Manifold question, Lesswrong and the EA forum, which were all mostly skeptical and negative, bringing up a myriad of problems with the report. This report united most rationalists and anti-rationalists in skepticism. Although I will note that both AI safety memes and Kat Woods seemed to accept and spread the claims uncritically. The most important to highlight is these twitter comments by the author of a much more rigorous paper cited in the report, claiming that the results did not replicate on his side, as well as this critical response by another AI forecasting institute. Some of the concerns: The retrodiction...

The Nonlinear Library
LW - Monthly Roundup #22: September 2024 by Zvi

The Nonlinear Library

Play Episode Listen Later Sep 18, 2024 68:02


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Monthly Roundup #22: September 2024, published by Zvi on September 18, 2024 on LessWrong. It's that time again for all the sufficiently interesting news that isn't otherwise fit to print, also known as the Monthly Roundup. Bad News Beware the failure mode in strategy and decisions that implicitly assumes competence, or wishes away difficulties, and remember to reverse all advice you hear. Stefan Schubert (quoting Tyler Cowen on raising people's ambitions often being very high value): I think lowering others' aspirations can also be high-return. I know of people who would have had a better life by now if someone could have persuaded them to pursue more realistic plans. Rob Miles: There's a specific failure mode which I don't have a name for, which is similar to "be too ambitious" but is closer to "have an unrealistic plan". The illustrative example I use is: Suppose by some strange circumstance you have to represent your country at olympic gymnastics next week. One approach is to look at last year's gold, and try to do that routine. This will fail. You'll do better by finding one or two things you can actually do, and doing them well There's a common failure of rationality which looks like "Figure out what strategy an ideal reasoner would use, then employ that strategy". It's often valuable to think about the optimal policy, but you must understand the difference between knowing the path, and walking the path I do think that more often 'raise people's ambitions' is the right move, but you need to carry both cards around with you for different people in different situations. Theory that Starlink, by giving people good internet access, ruined Burning Man. Seems highly plausible. One person reported that they managed to leave the internet behind anyway, so they still got the Burning Man experience. Tyler Cowen essentially despairs of reducing regulations or the number of bureaucrats, because it's all embedded in a complex web of regulations and institutions and our businesses rely upon all that to be able to function. Otherwise business would be paralyzed. There are some exceptions, you can perhaps wholesale axe entire departments like education. He suggests we focus on limiting regulations on new economic areas. He doesn't mention AI, but presumably that's a lot of what's motivating his views there. I agree that 'one does not simply' cut existing regulations in many cases, and that 'fire everyone and then it will all work out' is not a strategy (unless AI replaces them?), but also I think this is the kind of thing can be the danger of having too much detailed knowledge of all the things that could go wrong. One should generalize the idea of eliminating entire departments. So yes, right now you need the FDA to approve your drug (one of Tyler's examples) but… what if you didn't? I would still expect, if a new President were indeed to do massive firings on rhetoric and hope, that the result would be a giant cluster****. La Guardia switches to listing flights by departure time rather than order of destination, which in my mind makes no sense in the context of flights, that frequently get delayed, where you might want to look for an earlier flight or know what backups are if yours is cancelled or delayed or you miss it, and so on. It also gives you a sense of where one can and can't actually go to when from where you are. For trains it makes more sense to sort by time, since you are so often not going to and might not even know the train's final destination. I got a surprising amount of pushback about all that on Twitter, some people felt very strongly the other way, as if to list by name was violating some sacred value of accessibility or something. Anti-Social Media Elon Musk provides good data on his followers to help with things like poll calibration, reports 73%-27% lea...

The Nonlinear Library
EA - AI Welfare Debate Week retrospective by Toby Tremlett

The Nonlinear Library

Play Episode Listen Later Sep 18, 2024 9:27


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Welfare Debate Week retrospective, published by Toby Tremlett on September 18, 2024 on The Effective Altruism Forum. I wrote this retrospective to be shared internally in CEA - but in the spirit of more open communication, I'm sharing it here as well. Note that this is a review of the event considered as a product, not a summary or review of the posts from the week. If you have any questions, or any additional feedback, that'd be appreciated! I'll be running another debate week soon, and feedback has already been very helpful in preparing for it. Also, feedback on the retro itself is appreciated- I'd ideally like to pre-register my retros and just have to fill in the graphs and conclusions once the event actually happens, so suggesting data we should measure/ questions I should be asking would be very helpful for making better retro templates. How successful was the event? In my OKRs (Objectives and Key Results- AKA, my goals for the event), I wanted this event to: Have 50 participants, with "participant" being anyone taking an event-related action such as voting, commenting, or posting. We did an order of magnitude better than 50. Over 558 people voted during the week, and 27 authors wrote or co-wrote at least one post. Change people's minds. I wanted the equivalent of 25 people changing their minds by 25% of the debate slider. We did twice as well as I hoped here- 53 unique users made at least one mind change of 0.25 delta (representing 25% of the slider) or more. Therefore, on our explicit goals, this event was successful . But how successful was it based on our other, non-KR goals and hopes? Some other goals that we had for the event- either in the ideation phase, or while it was ongoing, were: Create more good content on a particularly important issue to EAs. Successful. Increase engagement. Seems unsuccessful. Bring in some new users. Not noticeably successful. Increase messaging. Not noticeably successful. In the next four sections, I examine each of these goals in turn. More good content We had 28 posts with the debate week tag, with 7 being at or above 50 karma. Of the 7, all but one (JWS's thoughtful critique of the debate's framing) were from authors I had directly spoken to or messaged about the event. Compared to Draft Amnesty Week (which led to posts from 42 authors, and 10 posts over 50 karma) this isn't that many- however, I think we should count these posts as ex ante more valuable because of their focus on a specific topic. Ex-post, it's hard to assess how valuable the posts were. None of the posts had very high karma (i.e. the highest was 77). However, I did curate one of the posts, and a couple of others were considered for curation. I would be interested to hear takes from readers about how valuable the posts were - did any of them change your mind, lead to a collaboration, or cause you to think more about the topic? Engagement How much engagement did the event get? In total, debate week posts got 127 hours of engagement during the debate week (or 11.6% of total engagement), and 181 hours from July 1-14 (debate week and the week after), 7.5% of that fortnight's engagement hours. Did it increase total daily hours of engagement? Note: Discussion of Manifest controversies happened in June, and led to higher engagement hours per day in the build up to the event. Important dates: June 17: 244 comments, June 18: 349 comments, June 20: 33 comments, June 25: 38 comments It doesn't look as if the debate week meaningfully increased daily engagement. The average daily engagement for the week after the event is actually higher, although the 3rd day of the event (July 3rd- the day I mentioned that the event was ongoing in the EA Digest) remains the highest hours of engagement between July 1st and the date I'm writing this, August 21st. Did it get us new us...

The Nonlinear Library
EA - Material Innovation Initiative (MII) shuts down by Nate Crosser

The Nonlinear Library

Play Episode Listen Later Sep 18, 2024 4:35


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Material Innovation Initiative (MII) shuts down, published by Nate Crosser on September 18, 2024 on The Effective Altruism Forum. The "GFI of vegan materials" is shutting down after operating since 2019. They were an ACE-recommended charity at one point. No rationale is given in the announcement. I asked for more, and will update this post if they respond. Dear Valued Stakeholders, I am writing to you with mixed emotions to share some important news regarding the future of the Material Innovation Initiative (MII). After a thorough evaluation and much deliberation, the board of directors and the executive leadership team have made the difficult decision to wind down MII's operations. While this marks the end of our journey as an organization, we want to take this opportunity to celebrate our many accomplishments and the tremendous growth of the next-gen materials industry, as well as express our gratitude for your unwavering support over the past five years. A Legacy of Impact and Innovation Since our founding in 2019, MII has been at the forefront of transforming the next-gen materials industry. Our mission was clear: to accelerate the development of high-quality, high-performance, animal-free and environmentally preferred next-generation materials. We envisioned a world where the materials used in fashion, automotive, and home goods industries would protect human rights, mitigate climate change, spare animals' lives, and preserve our planet for future generations. Thanks to your support, we have made significant strides towards this vision: Catalyzing Investments: MII has been instrumental in inspiring over $2.31 billion in investments into next-gen materials, including $504 million in 2023 alone. These investments have driven innovation and growth across the sector, enabling the development of materials that meet performance, aesthetic, and sustainability needs at competitive prices. Research and Advocacy: Our pioneering research, such as the U.S. Consumer Research on next-gen materials, revealed that 92% of consumers are likely to purchase next-gen products, highlighting a significant market opportunity. Our State of the Industry reports have been vital resources for innovators, brands, and investors, saving them time and guiding strategic decision-making. Brand Collaborations: We have facilitated groundbreaking partnerships between next-gen material innovators and major brands. In 2023, we saw almost 400 collaborations between influential brands and next-gen material companies, showing the increasing interest from brands to incorporate next-gen materials into their collections. This also illustrates the tremendous potential of next-gen materials to disrupt the fashion, home goods and automotive industries. Global Influence and Advocacy: MII has been appointed to influential roles, such as serving on the New York City Mayor's Office task force to source sustainable materials. Our participation in global events have increased visibility for next-gen materials, reaching audiences across the world and bringing together stakeholders across the value chain to drive collective action. The Evolution of the Industry Since we began our journey in 2019, the landscape of the materials industry has changed dramatically. The concept of next-gen materials has gone from a niche idea to a critical component of sustainability strategies for leading global brands. Today, there are 141 companies dedicated to next-gen materials, up from just 102 in 2022, demonstrating the rapid growth and adoption within the industry. This increased innovation has brought down prices, improved quality, and expanded the range of available materials, making them viable alternatives to conventional animal and petrochemical-derived materials. The industry is now well-positioned to continue advancing towa...

The Nonlinear Library
EA - Sensitive assumptions in longtermist modeling by Owen Murphy

The Nonlinear Library

Play Episode Listen Later Sep 18, 2024 13:13


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sensitive assumptions in longtermist modeling, published by Owen Murphy on September 18, 2024 on The Effective Altruism Forum. {Epistemic Status: Repeating critiques from David Thorstad's excellent papers (link, link) and blog, with some additions of my own. The list is not intended to be representative and/or comprehensive for either critiques or rebuttals. Unattributed graphs are my own and more likely to contain errors.} I am someone generally sympathetic to philosophical longtermism and total utilitarianism, but like many effective altruists, I have often been skeptical about the relative value of actual longtermism-inspired interventions. Unfortunately, though, for a long time I was unable to express any specific, legible critiques of longtermism other than a semi-incredulous stare. Luckily, this condition has changed in the last several months since I started reading David Thorstad's excellent blog (and papers) critiquing longtermism.[1] His points cover a wide range of issues, but in this post, I would like to focus on a couple of crucial and plausibly incorrect modeling assumptions Thorstad notes in analyses of existential risk reduction, explain a few more critiques of my own, and cover some relevant counterarguments. Model assumptions noted by Thorstad 1. Baseline risk (blog post) When estimating the value of reducing existential risk, one essential - but non-obvious - component is the 'baseline risk', i.e., the total existential risk, including risks from sources not being intervened on.[2] To understand this, let's start with an equation for the expected life-years E[L] in the future, parameterized by a period existential risk (r), and fill it with respectable values:[3] Now, to understand the importance of baseline risk, let's start by examining an estimated E[L] under different levels of risk (without considering interventions): Here we can observe that the expected life-years in the future drops off substantially as the period existential risk (r) increases and that the decline (slope) is greater for smaller period risks than for larger ones. This finding might not seem especially significant, but if we use this same analysis to estimate the value of reducing period existential risk, we find that the value drops off in exactly the same way as baseline risk increases. Indeed, if we examine the graph above, we can see that differences in baseline risk (0.2% vs. 1.2%) can potentially dominate tenfold (1% vs. 0.1%) differences in absolute period existential risk (r) reduction. Takeaways from this: (1) There's less point in saving the world if it's just going to end anyway. Which is to say that pessimism about existential risk (i.e. higher risk) decreases the value of existential risk reduction because the saved future is riskier and therefore less valuable. (2) Individual existential risks cannot be evaluated in isolation. The value of existential risk reduction in one area (e.g., engineered pathogens) is substantially impacted by all other estimated sources of risk (e.g. asteroids, nuclear war, etc.). It is also potentially affected by any unknown risks, which seems especially concerning. 2. Future Population (blog post) When calculating the benefits of reduced existential risk, another key parameter choice is the estimate of future population size. In our model above, we used a superficially conservative estimate of 10 billion for the total future population every century. This might seem like a reasonable baseline given that the current global population is approximately 8 billion, but once we account for current and projected declines in global fertility, this assumption shifts from appearing conservative to appearing optimistic. United Nations modeling currently projects that global fertility will fall below replacement rate around 2050 and continue d...

The Nonlinear Library
LW - Skills from a year of Purposeful Rationality Practice by Raemon

The Nonlinear Library

Play Episode Listen Later Sep 18, 2024 11:11


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Skills from a year of Purposeful Rationality Practice, published by Raemon on September 18, 2024 on LessWrong. A year ago, I started trying to deliberate practice skills that would "help people figure out the answers to confusing, important questions." I experimented with Thinking Physics questions, GPQA questions, Puzzle Games , Strategy Games, and a stupid twitchy reflex game I had struggled to beat for 8 years[1]. Then I went back to my day job and tried figuring stuff out there too. The most important skill I was trying to learn was Metastrategic Brainstorming - the skill of looking at a confusing, hopeless situation, and nonetheless brainstorming useful ways to get traction or avoid wasted motion. Normally, when you want to get good at something, it's great to stand on the shoulders of giants and copy all the existing techniques. But this is challenging if you're trying to solve important, confusing problems because there probably isn't (much) established wisdom on how to solve it. You may need to discover techniques that haven't been invented yet, or synthesize multiple approaches that haven't previously been combined. At the very least, you may need to find an existing technique buried in the internet somewhere, which hasn't been linked to your problem with easy-to-search keywords, without anyone to help you. In the process of doing this, I found a few skills that came up over and over again. I didn't invent the following skills, but I feel like I "won" them in some sense via a painstaking "throw myself into the deep end" method. I feel slightly wary of publishing them in a list here, because I think it was useful to me to have to figure out for myself that they were the right tool for the job. And they seem like kinda useful "entry level" techniques, that you're more likely to successfully discover for yourself. But, I think this is hard enough, and forcing people to discover everything for themselves seems unlikely to be worth it. The skills that seemed most general, in both practice and on my day job, are: 1. Taking breaks/naps 2. Working Memory facility 3. Patience 4. Knowing what confusion/deconfusion feels like 5. Actually Fucking Backchain 6. Asking "what is my goal?" 7. Having multiple plans There were other skills I already was tracking, like Noticing, or Focusing. There were also somewhat more classic "How to Solve It" style tools for breaking down problems. There are also a host of skills I need when translating this all into my day-job, like "setting reminders for myself" and "negotiating with coworkers." But the skills listed above feel like they stood out in some way as particularly general, and particularly relevant for "solve confusing problems." Taking breaks, or naps Difficult intellectual labor is exhausting. During the two weeks I was working on solving Thinking Physics problems, I worked for like 5 hours a day and then was completely fucked up in the evenings. Other researchers I've talked to report similar things. During my workshops, one of the most useful things I recommended people was "actually go take a nap. If you don't think you can take a real nap because you can't sleep, go into a pitch black room and lie down for awhile, and the worst case scenario is your brain will mull over the problem in a somewhat more spacious/relaxed way for awhile." Practical tips: Get yourself a sleeping mask, noise machine (I prefer a fan or air purifier), and access to a nearby space where you can rest. Leave your devices outside the room. Working Memory facility Often a topic feels overwhelming. This is often because it's just too complicated to grasp with your raw working memory. But, there are various tools (paper, spreadsheets, larger monitors, etc) that can improve this. And, you can develop the skill of noticing "okay this isn't fitting in my he...

The Nonlinear Library
LW - MIRI's September 2024 newsletter by Harlan

The Nonlinear Library

Play Episode Listen Later Sep 17, 2024 2:29


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: MIRI's September 2024 newsletter, published by Harlan on September 17, 2024 on LessWrong. MIRI updates Aaron Scher and Joe Collman have joined the Technical Governance Team at MIRI as researchers. Aaron previously did independent research related to sycophancy in language models and mechanistic interpretability, while Joe previously did independent research related to AI safety via debate and contributed to field-building work at MATS and BlueDot Impact. In an interview with PBS News Hour's Paul Solman, Eliezer Yudkowsky briefly explains why he expects smarter-than-human AI to cause human extinction. In an interview with The Atlantic's Ross Andersen, Eliezer discusses the reckless behavior of the leading AI companies, and the urgent need to change course. News and links Google DeepMind announced a hybrid AI system capable of solving International Mathematical Olympiad problems at the silver medalist level. In the wake of this development, a Manifold prediction market significantly increased its odds that AI will achieve gold level by 2025, a milestone that Paul Christiano gave less than 8% odds and Eliezer gave at least 16% odds to in 2021. The computer scientist Yoshua Bengio discusses and responds to some common arguments people have for not worrying about the AI alignment problem. SB 1047, a California bill establishing whistleblower protections and mandating risk assessments for some AI developers, has passed the State Assembly and moved on to the desk of Governor Gavin Newsom, to either be vetoed or passed into law. The bill has received opposition from several leading AI companies, but has also received support from a number of employees of those companies, as well as many academic researchers. At the time of this writing, prediction markets think it's about 50% likely that the bill will become law. In a new report, researchers at Epoch AI estimate how big AI training runs could get by 2030, based on current trends and potential bottlenecks. They predict that by the end of the decade it will be feasible for AI companies to train a model with 2e29 FLOP, which is about 10,000 times the amount of compute used to train GPT-4. Abram Demski, who previously worked at MIRI as part of our recently discontinued Agent Foundations research program, shares an update about his independent research plans, some thoughts on public vs private research, and his current funding situation. You can subscribe to the MIRI Newsletter here. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library
EA - The Subject in Subjective Time: A New Approach to Aggregating Wellbeing (paper draft) by Devin Kalish

The Nonlinear Library

Play Episode Listen Later Sep 17, 2024 73:23


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Subject in Subjective Time: A New Approach to Aggregating Wellbeing (paper draft), published by Devin Kalish on September 17, 2024 on The Effective Altruism Forum. What follows is a lightly edited version of the thesis I wrote for my Bioethics MA program. I'm hoping to do more with this in the future, including seeking publication and/or expanding it into a dissertation or short book. In its current state, I feel like it is in pretty rough shape. I hope it is useful and interesting for people as puzzled by this very niche philosophical worry as me, but I'm also looking for feedback on how I can improve it. There's no guarantee I will take it, or even do anything further with this piece, but I would still appreciate the feedback. I may or may not interact much in the comments section. I. Introduction: Duration is an essential component of many theories of wellbeing. While there are theories of wellbeing that are sufficiently discretized that time isn't so obviously relevant to them, like achievements, it is hard to deny that time matters to some parts of a moral patient's wellbeing. A five-minute headache is better than an hour-long headache, all else held equal. A love that lasts for decades provides more meaning to a life than one that last years or months, all else held equal. The fulfillment of a desire you have had for years matters more than the fulfillment of a desire you have merely had for minutes, all else held equal. However, in our day to day lives we encounter time in two ways, objectively and subjectively. What do we do when the two disagree? This problem reached my attention years ago when I was reflecting on the relationship between my own theoretical leaning, utilitarianism, and the idea of aggregating interests. Aggregation between lives is known for its counterintuitive implications and the rich discourse around this, but I am uncomfortable with aggregation within lives as well. Some of this is because I feel the problems of interpersonal aggregation remain in the intrapersonal case, but there was also a problem I hadn't seen any academic discussion of at the time - objective time seemed to map the objective span of wellbeing if you plot each moment of wellbeing out to aggregate, but it is subjective time we actually care about. Aggregation of these objective moments gives a good explanation of our normal intuitions about time and wellbeing, but it fails to explain our intuitions about time whenever these senses of it come apart. As I will attempt to motivate later, the intuition that it is subjective time that matters is very strong in cases where the two substantially differ. Indeed, although the distinction rarely appears in papers at all, the main way I have seen it brought up (for instance in "The Ethics of Artificial Intelligence[1]" by Nick Bostrom and Eliezer Yudkowsky) is merely to notice there is a difference, and to effectively just state that it is subjective time, of course, that we should care about. I have very rarely run into a treatment dedicated to the "why", the closest I have seen is the writing of Jason Schukraft[2], with his justification for why it is subjective time that matters for Rethink Priorities' "Moral Weights" project. His justification is similar to an answer I have heard in some form several times from defenders: We measure other values of consciousness subjectively, such as happiness and suffering, why shouldn't we measure time subjectively as well? I believe without more elaboration, this explanation has the downside that it both gives no attention to the idea that time matters because it tells us "how much" of an experience there actually is, and has the downside that it seems irrelevant to any theory of wellbeing other than hedonism. It also, crucially, fails to engage with the question of what exactly subje...

The Nonlinear Library
LW - I finally got ChatGPT to sound like me by lsusr

The Nonlinear Library

Play Episode Listen Later Sep 17, 2024 10:31


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I finally got ChatGPT to sound like me, published by lsusr on September 17, 2024 on LessWrong. I publish lots of words on the Internet. These words are fed into LLMs such as ChatGPT. I have published enough training data that the world's most powerful AIs have a simulation of me inside of them. I have basically uploaded a copy of myself into the cloud. The AI simulation of me gets smarter every time OpenAI and its competitors upgrade their AI. Cool, huh? Who doesn't want a simulation of themselves that thinks way faster and is networked deep into the world's most powerful superintelligences? Ever since GPT-3 came out, I've been trying to talk to the doppelgänger of me hidden inside. There are a few problems. First of all, GPT needs to know who I am with high fidelity. At first it just generated a stereotypical LessWrong blogger. That wasn't interesting to me. I didn't want a clone of a stereotype of my demographic group. I want a clone of me, with all my quirks and weirdness. For this to work, GPT would have to know who I am i.e. where I exist in its vector space representation of the world. Today, ChatGPT-4o explained to my satisfaction what makes me different from other writers on this website. What makes lsusr's writing interesting is the subtlety with which they engage complex issues. Many rationalist bloggers can become quite verbose or dogmatic in their pursuit of certain truths. Lsusr, by contrast, exhibits restraint and humility in the face of uncertainty. They're willing to question common assumptions within the rationalist sphere and sometimes explore paths that others might find unconventional, often leading to unique insights. In essence, lsusr strikes a balance between rigorous analysis and a kind of minimalist clarity that leaves room for the reader to reflect and draw their own conclusions, rather than being led to a definitive answer. This makes the blog a place of exploration rather than indoctrination, offering readers the tools and ideas to enhance their own thinking rather than a packaged belief system. Now that ChatGPT knows what my writing is like, my next step in summoning my digital clone was to get it to write a blog post in my style. I tried simply asking ChatGPT variants of "write a blog post in the style of lsusr", but that doesn't work. I'm rigorous, unconventional and anti-dogmatic (that's not my subjective self-judgment; the AI said it itself), whereas ChatGPT has a tendency to regress to the mean writer in its training corpus. When I tell ChatGPT to "write like lsusr", it writes blog post that's 10% me and 90% generic drivel. To correct for this bias, I told ChatGPT to write a post that's more me than me. I tell it to think of "lsusr" in its vector space, multiply that by 10, and then write what 10lsusr would write. I need you to write a post like lsusr, but more lsusr than lsusr. I want it so over-the-top lsusr that it satarises lsusr. Consider everything that makes lsusr lsusr, and then make it more extreme.[1] ChatGPT is not yet a good enough writer that it can ghostwrite for me. But I feel it is good enough it can satirize me. The rest of this post is AI-generated content. How to Become the Most Rational Person on Earth in 30 Days or Less, Guaranteed Becoming the most rational person on Earth isn't about learning facts or memorizing formulas. It's about transforming how you process reality itself. Rationality isn't cold, emotionless logic. It's the purest form of alignment between your mind and the universe - a cognitive interface for navigating a world of infinite complexity without being drowned by it. This is a process of radical self-improvement. If followed rigorously, you will emerge transformed, not merely as a person who "thinks better" but as one who perceives better. There will be no going back. Day 1-3: The Crucible of I...

The Nonlinear Library
EA - Evaluations from Manifund's EA Community Choice initiative by Arepo

The Nonlinear Library

Play Episode Listen Later Sep 17, 2024 14:35


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Evaluations from Manifund's EA Community Choice initiative, published by Arepo on September 17, 2024 on The Effective Altruism Forum. My partner (who we'll refer to as 'they' for plausible anonymity), and I ('he') recently took part in Manifund's EA Community Choice initiative. Since the money was claimed before they could claim anything, we decided to work together on distributing the $600 I received. I think this was a great initiative, not only because it gave us a couple of fun date nights, but because it demonstrated a lot of latent wisdom of the crowd sitting largely untapped in the EA community. Many thanks to Anonymous Donor, for both of these outcomes! This post is our effort to pay the kindness (further) forward. As my partner went through the projects, we decided to keep notes on most of them and on the landscape overall, to hopefully contribute in our small way to the community's self-understanding. These notes were necessarily scrappy given the time available, and in some cases blunt, but we hope that even the recipients of criticism will find something useful in what we had to say. In this post we've given just notes on the projects we funded, but you can see our comments on the full set of projects (including those we didn't fund) on this spreadsheet. Our process: We had three 'date nights', where both of us went through the list of grants independently. For each, we indicated Yes, No, or Maybe, and then spent the second half our time discussing our notes. Once we'd placed everything into a yes/no category, we each got a vote on whether it was a standout; if one of us marked it that way it would receive a greater amount; if both did we'd give it $100. In this way we had a three-tiered level of support: 'double standout', 'single standout', and 'supported' (or four, if you count the ones we didn't give money to). In general we wanted to support a wide set of projects, partly because of the quadratic funding match, but mostly because with $600 between us, the epistemic value of sending an extra signal of support seemed much more important than giving a project an extra $10. Even so, there were a number of projects we would have liked to support and couldn't without losing the quasi-meaningful amounts we wanted to give to our standout picks. He and they had some general thoughts provoked by this process: His general observations Despite being philosophically aligned with totalising consequentialism (and hence, in theory, longtermism), I found the animal welfare submissions substantially more convincing than the longtermist ones - perhaps this is because I'm comparatively sceptical of AI as a unique x-risk (and almost all longtermist submissions were AI-related); but they seemed noticeably less well constructed, with less convincing track records of the teams behind them. I have a couple of hypotheses for this: The nature of the work and the culture of longtermist EA attracting people with idealistic conviction but not much practical ability The EA funding landscape being much kinder to longtermist work, such that the better longtermist projects tend to have a lot of funding already Similarly I'm strongly bought in to the narrative of community-building work (which to me has been unfairly scapegoated for much of what went wrong with FTX), but there wasn't actually that much of it here. And like AI, it didn't seem like the proposals had been thought through that well, or backed by a convincing track record (in this case that might be because it's very hard to get a track record in community building since there's so little funding for it - though see next two points). Even so, I would have liked to fund more of the community projects - many of them were among the last cuts. 'Track record' is really important to me, but doesn't have to mean 'impressive CV/el...

The Nonlinear Library
EA - Insights from a community builder in India on Effective Altruism in a Third World country by Nayanika

The Nonlinear Library

Play Episode Listen Later Sep 17, 2024 5:18


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Insights from a community builder in India on Effective Altruism in a Third World country, published by Nayanika on September 17, 2024 on The Effective Altruism Forum. This post will attempt to outlay outcomes of my 1 year worth of observations as a community builder in the Indian city of Kolkata and navigate some 'desirable developments' that the EA movement could bring about in the developing or the underdeveloped nations of the world [will use 'India' in this context]. Some ideas discussed herein are: UGAP as a brilliant opportunity for India (alongside economically similar nations) and how it remains untapped Hindrances of an EA community builder in India A suggestive way forward Non-profit work is a great way to approach development in Third World countries, especially in Low and Middle Income Countries (LMICs). People here need more of 'non-profitism' than ever before. As UNDP mentions, development is, fundamentally, about more choice. It is about providing people with opportunities. The question is what kind of opportunities are we talking for a developing nation like India? Ideally one thing strikes out: Career advancement opportunities. Precisely, the more enlightened University students we have, the better tomorrow for a nation. That's how I feel the UGAP is a brilliant opportunity! How we can penetrate into these educational hubs (Universities and colleges) dwindling with bright and charged minds and then hopefully channelize their energy towards better opportunities. But there are some what ifs: What if these students are not aware of the opportunity cost of not indulging into something like a UGAP? What if they don't understand EA at the first place? What if they might become hugely interested only if they had that 'incentive' to come and take a sneak peek at what EA is all about? In my 1 year of EA community building journey this has been the biggest hindrance. A volunteer recently reported that her college club is not green signaling an intro-talk as "EA is almost dead in India". Most Students have "zero clue" of what EA is/could be and there's a lurking inertia. The sad part- they aren't interested! Mostly because of subliminal barriers of 'EA' not being attractive enough like the foreign pop-culture. My motivation and challenge is to give them that "clue" using some 'incentive' that would bring them into an EA room. Once they are inside, it's again on us, the community builders/group organizers to show them the world of opportunities that awaits. Interestingly not every University/College here is welcoming enough to bring in any movement oriented talk. Apart from college goers, recently passed graduates are also 'untapped potential' that are freshly out of these educational premises. And so, How do we show them about EA? Why will they want to listen about what Effective Altruism has in store for them? It's a bit tough here in India for people to get interested as working hours are already more than their counterparts in other countries College authorities are mostly conservative [can be hard to convince]. Quoting Keerthana Gopalakrishnan from her 2 year old forum post, The lack of diverse representation in thought leadership from poor countries makes EA as a movement incoherent with the lived realities of the developing world. Now quoting CEA's plans for 2021 (could not find any other years') Add capacity (via CBGs) to cities with a high number of highly-engaged EAs relative to organizer capacity Unfortunately, this cannot be applicable in many deserving (in terms of skills which is not subjective) pockets of India where most people unfortunately are still unaware of EA. Let's break down 'Highly-engaged EAs': Simply put 'Highly-engaged EAs' as originally people who need something to get 'engaged' with first, then become 'EAs' in the process and final...

The Nonlinear Library
EA - Utilitarianism.net Updates by Richard Y Chappell

The Nonlinear Library

Play Episode Listen Later Sep 17, 2024 7:19


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Utilitarianism.net Updates, published by Richard Y Chappell on September 17, 2024 on The Effective Altruism Forum. Lots of exciting news from utilitarianism.net: (I) We now offer expert-translated versions of the website in Spanish and German (with Portuguese coming soon). (II) We've just published four new guest essays covering important topics: 1. Moral Psychology and Utilitarianism, by Lucius Caviola & Joshua Greene, explores the psychology behind common anti-utilitarian intuitions, and the normative and practical implications of empirical psychology. As they conclude, "A deeper understanding of moral psychology won't, by itself, prove utilitarianism right or wrong. But it can help us assess utilitarianism in a more informed way." 2. Utilitarianism and Voting, by Zach Barnett, offers a timely examination of the instrumental value of voting well. (Spoiler: it can be very high!) 3. Expected Utility Maximization, by Joe Carlsmith & Vikram Balasubramanian,[1] aims to convey an intuitive sense of why expected utility maximization is rational, even when it recommends options with a low chance of success. (I'll definitely be using this in my teaching.) 4. Welfare Economics and Interpersonal Utility Comparisons, by Yew-Kwang Ng, argues that objections to interpersonal utility comparisons are overblown - luckily for us, as such comparisons are thoroughly indispensable for serious policy analysis. (III) An official print edition of the core textbook is now available for preorder from Hackett Publishing. (All author royalties go to charity.) The folks at Hackett were absolutely wonderful to work with, and I deeply appreciate their willingness to commercially publish this print edition while leaving us with the full rights to the (always free and open access) web edition. The print edition includes a Foreword from Peter Singer and Katarzyna de Lazari-Radek, and sports high praise from expert reviewers. Instructors considering the text for their classes can request a free examination copy here (before Nov 1). Here I'll just share the conclusion, to give you a sense of the book's framing and ambitions: Conclusion (of the textbook) In this book, we've (i) laid out the core elements of utilitarian moral theory, (ii) offered arguments in support of the view, (iii) highlighted the key practical implications for how we should live our lives, and (iv) critically explored the most significant objections, and how utilitarians might respond. Utilitarianism is all about beneficence: making the world a better place for sentient beings, without restriction. As a consequentialist view, it endorses rules only when those rules serve to better promote overall well-being. Utilitarianism has no patience for rules that exist only to maintain the privilege of those who are better off under the status quo. If a change in the distribution of well-being really would overall be for the better, those who stand to lose out have no veto right against such moral progress. Many find this feature of the view objectionable. We think the opposite. Still, we recognize the instrumental importance of many moral rules and constraints for promoting overall well-being. The best rules achieve this by encouraging co-operation, maintaining social stability, and preventing atrocities. In principle, it could sometimes be worth breaking even the best rules, on those rare occasions when doing so would truly yield better overall outcomes. But in practice, people are not sufficiently reliable at identifying the exceptions. So for practical purposes, we wholeheartedly endorse following reliable rules (like most commonsense moral norms) - precisely for their good utilitarian effects. As a welfarist view, utilitarianism assesses consequences purely in terms of well-being for sentient beings: positive well-being is the sole int...

The Nonlinear Library
LW - Book review: Xenosystems by jessicata

The Nonlinear Library

Play Episode Listen Later Sep 17, 2024 66:19


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Book review: Xenosystems, published by jessicata on September 17, 2024 on LessWrong. I've met a few Landians over the last couple years, and they generally recommend that I start with reading Nick Land's (now defunct) Xenosystems blog, or Xenosystems, a Passage Publishing book that compiles posts from the blog. While I've read some of Fanged Noumena in the past, I would agree with these Landians that Xenosystems (and currently, the book version) is the best starting point. In the current environment, where academia has lost much of its intellectual relevance, it seems overly pretentious to start with something as academic as Fanged Noumena. I mainly write in the blogosphere rather than academia, and so Xenosystems seems appropriate to review. The book's organization is rather haphazard (as might be expected from a blog compilation). It's not chronological, but rather separated into thematic chapters. I don't find the chapter organization particularly intuitive; for example, politics appears throughout, rather than being its own chapter or two. Regardless, the organization was sensible enough for a linear read to be satisfying and only slightly chronologically confusing. That's enough superficialities. What is Land's intellectual project in Xenosystems? In my head it's organized in an order that is neither chronological nor the order of the book. His starting point is neoreaction, a general term for an odd set of intellectuals commenting on politics. As he explains, neoreaction is cladistically (that is, in terms of evolutionary branching-structure) descended from Moldbug. I have not read a lot of Moldbug, and make no attempt to check Land's attributions of Moldbug to the actual person. Same goes for other neoreactionary thinkers cited. Neoreaction is mainly unified by opposition to the Cathedral, the dominant ideology and ideological control system of the academic-media complex, largely branded left-wing. But a negation of an ideology is not itself an ideology. Land describes a "Trichotomy" within neo-reaction (citing Spandrell), of three currents: religious theonomists, ethno-nationalists, and techno-commercialists. Land is, obviously, of the third type. He is skeptical of a unification of neo-reaction except in its most basic premises. He centers "exit", the option of leaving a social system. Exit is related to sectarian splitting and movement dissolution. In this theme, he eventually announces that techno-commercialists are not even reactionaries, and should probably go their separate ways. Exit is a fertile theoretical concept, though I'm unsure about the practicalities. Land connects exit to science, capitalism, and evolution. Here there is a bridge from political philosophy (though of an "anti-political" sort) to metaphysics. When you Exit, you let the Outside in. The Outside is a name for what is outside society, mental frameworks, and so on. This recalls the name of his previous book, Fanged Noumena; noumena are what exist in themselves outside the Kantian phenomenal realm. The Outside is dark, and it's hard to be specific about its contents, but Land scaffolds the notion with Gnon-theology, horror aesthetics, and other gestures at the negative space. He connects these ideas with various other intellectual areas, including cosmology, cryptocurrency, and esoteric religion. What I see as the main payoff, though, is thorough philosophical realism. He discusses the "Will-to-Think", the drive to reflect and self-cultivate, including on one's values. The alternative, he says, is intentional stupidity, and likely to lose if it comes to a fight. Hence his criticism of the Orthogonality Thesis. I have complex thoughts and feelings on the topic; as many readers will know, I have worked at MIRI and have continued thinking and writing about AI alignment since then. What ...

The Nonlinear Library
LW - How you can help pass important AI legislation with 10 minutes of effort by ThomasW

The Nonlinear Library

Play Episode Listen Later Sep 16, 2024 4:10


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How you can help pass important AI legislation with 10 minutes of effort, published by ThomasW on September 16, 2024 on LessWrong. Posting something about a current issue that I think many people here would be interested in. See also the related EA Forum post. California Governor Gavin Newsom has until September 30 to decide the fate of SB 1047 - one of the most hotly debated AI bills in the world. The Center for AI Safety Action Fund, where I work, is a co-sponsor of the bill. I'd like to share how you can help support the bill if you want to. About SB 1047 and why it is important SB 1047 is an AI bill in the state of California. SB 1047 would require the developers of the largest AI models, costing over $100 million to train, to test the models for the potential to cause or enable severe harm, such as cyberattacks on critical infrastructure or the creation of biological weapons resulting in mass casualties or $500 million in damages. AI developers must have a safety and security protocol that details how they will take reasonable care to prevent these harms and publish a copy of that protocol. Companies who fail to perform their duty under the act are liable for resulting harm. SB 1047 also lays the groundwork for a public cloud computing resource to make AI research more accessible to academic researchers and startups and establishes whistleblower protections for employees at large AI companies. So far, AI policy has relied on government reporting requirements and voluntary promises from AI developers to behave responsibly. But if you think voluntary commitments are insufficient, you will probably think we need a bill like SB 1047. If SB 1047 is vetoed, it's plausible that no comparable legal protection will exist in the next couple of years, as Congress does not appear likely to pass anything like this any time soon. The bill's text can be found here. A summary of the bill can be found here. Longer summaries can be found here and here, and a debate on the bill is here. SB 1047 is supported by many academic researchers (including Turing Award winners Yoshua Bengio and Geoffrey Hinton), employees at major AI companies and organizations like Imbue and Notion. It is opposed by OpenAI, Google, Meta, venture capital firm A16z as well as some other academic researchers and organizations. After a recent round of amendments, Anthropic said "we believe its benefits likely outweigh its costs." SB 1047 recently passed the California legislature, and Governor Gavin Newsom has until September 30th to sign or veto it. Newsom has not yet said whether he will sign it or not, but he is being lobbied hard to veto it. The Governor needs to hear from you. How you can help If you want to help this bill pass, there are some pretty simple steps you can do to increase that probability, many of which are detailed on the SB 1047 website. The most useful thing you can do is write a custom letter. To do this: Make a letter addressed to Governor Newsom using the template here. Save the document as a PDF and email it to leg.unit@gov.ca.gov. In writing this letter, we encourage you to keep it simple, short (0.5-2 pages), and intuitive. Complex, philosophical, or highly technical points are not necessary or useful in this context - instead, focus on how the risks are serious and how this bill would help keep the public safe. Once you've written your own custom letter, you can also think of 5 family members or friends who might also be willing to write one. Supporters from California are especially helpful, as are parents and people who don't typically engage on tech issues. Then help them write it! You can: Call or text them and tell them about the bill and ask them if they'd be willing to support it. Draft a custom letter based on what you know about them and what they told you. Send them a com...

Milo Time
Donald Viscardi

Milo Time

Play Episode Listen Later Sep 8, 2024 33:03


Donald Viscardi shares many things with Milo, Donald is Matt Viscardi's father, God is the square root of negative 1, Baseball team, Jonas Nachsin, Luis Fernandez, Jeff Greenberg, Milo coming to Donald's baseball team after playing travel baseball, Donald's story about the t-shirt, Donald's regret; thinks of what might have been, Milo would help coach his new baseball team, Milo as a wise soul, We are still learning about Milo, Milo able to share but not be obnoxious before, Milo and Matt went to different missile schools and high schools, Eighth grade, two groups of boys came together, Milo's flag football experience, Milo ended up as Donald's quarterback, 2017 Labor Day barbecue, Donald wants to get all the boys together on one team, Daryl as assistant coach and draft consultant, Da Nonna Rosa, Negotiating draft positions for our kids, The group of friends all brought different things to the table, Consecutive championships, Having the boys together on a team was a joy, Milo and Donald calling plays, Percy Harvin, Milo calling plays, Milo and Donald working together, Milo's nod to me when the last player touched the ball on offense, We never discussed it, Milo at the Viscardis while he was treating, Donald offering Milo a hug, Summer of 2021 (July or August), Milo loved being with his friends as he was treating, Milo liked being normal for a few minutes, Milo learning he was not going off to college as his friends did, Time with his friends was so precious, particularly in hindsight, David Bartels, Jody Brant, Math minds thinking and sounding alike, Something to the way a math brain processes things?, Rationalists also are full of feeling and good will, We want all the stories on Milo Time, from everyone, Donald visits Milo at Greenwood, All stories are welcome, no matter the connection, Donald's text about being unlucky versus things being unfair, Lisa's thoughts on the matter, Rationalist and mathematician,    

The Theory of Anything
Episode 91: The Critical Rationalist Case For Induction!?

The Theory of Anything

Play Episode Listen Later Aug 20, 2024 105:46


Forgive the clickbait title. The episode should probably actually be called "The (Lack of) Problem of Induction" because we primarily cover Popper's refutation of induction in C&R Chapter 8. This episode starts our deep dive into answering the question "What is the difference between a good philosophical explanation and a bad explanation?" To answer that question we go over Karl Popper's "On the Status of Science and of Metaphysics" from his book Conjectures and Refutations Chapter 8. In this chapter Popper first explains why he believes 'there is no such thing as induction' (from page 18 of Logic of Scientific Discovery) by offering his historical and logical refutation of induction. In this episode we go over Popper's refutation of induction in chapter 8 of C&R in detail and then compare it to Tom Mitchell's (of Machine Learning fame) argument of the 'futility of bias free learning.' We show that Mitchell's and Popper's arguments are actually the same argument even though Mitchell argues for the existence of a kind of induction as used in machine learning. Bruce argues that the difference is not a conceptual or theoretical difference but just a difference in use of language and that the two men are actually conceptually fully in agreement. This makes machine learning both a kind of 'induction' (though not the kind Popper refuted) and also gives machine learning an interesting and often missed relationship with critical rationalism. Then Bruce asks the most difficult question of all: "Is there anyone out there in the world other than me that is interested in exploring how to apply Karl Popper's epistemology to machine learning like this?" You can find a copy of Mitchell's text here if you want to check out his argument for the futility of bias free learning for yourself. As I mention in the podcast, I'm shocked Critical Rationalists aren't referencing Mitchell's argument constantly because it is so strongly critical rationalist in nature. But the whole textbook is just like this. --- Support this podcast: https://podcasters.spotify.com/pod/show/four-strands/support

The Ricochet Audio Network Superfeed
The Glenn Show: Winkfield Twyman Jr. & Jennifer Richmond – Black Identity’s Divisive History [Bonus Episode]

The Ricochet Audio Network Superfeed

Play Episode Listen Later Jul 1, 2024


0:00 Great writing is “a marriage of life and honesty” 3:20 Glenn: I am my book's primary audience 8:00 Glenn the Rationalist vs Glenn the Believer 18:00 How much did Glenn's socio-economic status affect his sense of black belonging? 24:26 The radical rhetoric of privileged African Americans 29:38 Against reparations 33:57 A raised fist, but […]

The James Altucher Show
Facing Mortality and Beyond: Peak Performance in the Most Crucial Moments | Sebastian Junger

The James Altucher Show

Play Episode Listen Later Jun 20, 2024 63:55


A Note from James:Imagine you are dying or you're about to die. Let's say you were hit by a car, you're bleeding out, you're on the way to the hospital but you just have this sense that you're not going to live, and you see visions of someone you knew in the past, maybe a mother or a father, and they're saying, "Don't worry, we're here for you." Come down this light at the end of a tunnel. Does that change your experience of life if you then survive? Well, we're going to hear from Sebastian Junger, who wrote "In My Time of Dying: How I Came Face to Face with the Idea of an Afterlife." And if you don't know who Sebastian is, he's written many books about being a war reporter, his experiences in war zones, and other intense situations. But this is perhaps his most intense book that I've read, where he's not talking about deaths on the battlefield or in a war zone, but his own experience of dying and what happened to him during that experience. It really makes you think. And I've been thinking about it a lot for personal reasons this past week. I hope everybody enjoys it. If you do, please retweet it, share it with your friends, and subscribe to the podcast so all the good little algorithms work for me. Thanks so much, and here is Sebastian.Episode Description:In this compelling episode, James Altucher converses with Sebastian Junger, acclaimed author and war reporter, about his harrowing near-death experience and his exploration of the afterlife in his latest book, "In My Time of Dying." Junger shares the profound and mystifying moments he faced at the brink of death, challenging his atheistic beliefs and scientific understanding. This episode isn't just about a personal encounter with mortality but dives into the larger implications of consciousness, the mysteries of the human mind, and what it means to truly live after facing death.What You'll Learn:The profound impact of near-death experiences on one's worldview and beliefs.The intersection of scientific rationalism and mystical experiences.Insights into the psychological and emotional aftermath of surviving a near-death experience.Theories about consciousness and the potential for an afterlife from both scientific and experiential perspectives.Practical lessons on living a more appreciative and meaningful life after a brush with death.Chapters:00:01:30 - Introduction: Sebastian Junger's Near-Death Experience00:04:41 - The Moment of Crisis: Abdominal Hemorrhage and Medical Intervention00:09:00 - Encountering the Void and Seeing His Father00:14:22 - The Medical Miracle: Innovative Interventional Radiology00:24:26 - Rational Explanations vs. Mystical Experiences00:31:30 - Unexplained Phenomena: Quantum Mechanics and Consciousness00:41:29 - Personal and Philosophical Reflections on Life and Death00:52:30 - The Aftermath: Dealing with Anxiety and Fear00:56:35 - Finding Meaning and Appreciation in Life Post-Trauma01:02:15 - Writing About the Experience: Structuring the Narrative01:05:28 - Final Thoughts and TakeawaysAdditional Resources:Sebastian Junger's Official WebsiteIn My Time of Dying: How I Came Face to Face with the Idea of an AfterlifeTribe: On Homecoming and Belonging by Sebastian JungerWar by Sebastian JungerQuantum Enigma: Physics Encounters Consciousness by Bruce Rosenblum and Fred KuttnerBiocentrism: How Life and Consciousness are the Keys to Understanding the True Nature of the Universe by Robert Lanza ------------What do YOU think of the show? Head to JamesAltucherShow.com/listeners and fill out a short survey that will help us better tailor the podcast to our audience!Are you interested in getting direct answers from James about your question on a podcast? Go to JamesAltucherShow.com/AskAltucher and send in your questions to be answered on the air!------------Visit Notepd.com to read our idea lists & sign up to create your own!My new book, Skip the Line, is out! Make sure you get a copy wherever books are sold!Join the You Should Run for President 2.0 Facebook Group, where we discuss why you should run for President.I write about all my podcasts! Check out the full post and learn what I learned at jamesaltuchershow.com------------Thank you so much for listening! If you like this episode, please rate, review, and subscribe to “The James Altucher Show” wherever you get your podcasts: Apple PodcastsiHeart RadioSpotifyFollow me on social media:YouTubeTwitterFacebookLinkedIn