Podcasts about Yudkowsky

  • 62 PODCASTS
  • 562 EPISODES
  • 1h 3m AVG DURATION
  • 1 WEEKLY EPISODE
  • May 8, 2025 LATEST

POPULARITY (2017–2024)


Best podcasts about Yudkowsky

Latest podcast episodes about Yudkowsky

Investor Mama
175 | Master Your Money: Strategic Financial Planning Tips | Sophia Yudkowsky, CFP®

Investor Mama

May 8, 2025


In this episode of the Investor Mama podcast, Certified Financial Planner Sophia Yudkowsky, CFP®, shares expert tips on strategic financial planning to help you take control of your money. Learn how to create a practical budget, reduce debt, and build long-term financial security—even if you're starting from scratch. Sophia breaks down complex personal finance strategies into simple, actionable steps for families and busy moms. Whether you're working on your savings goals, planning for retirement, or just trying to get organized, this episode gives you the tools to succeed. Don't miss this powerful conversation on building a wealth mindset and achieving financial freedom.

The Bayesian Conspiracy
236 – Twilight of the Edgelords with Liam Nolan

The Bayesian Conspiracy

Apr 30, 2025 · 107:29


Eneasz and Liam discuss Scott Alexander's post “Twilight of the Edgelords,” an exploration of Truth, Morality, and how one balances love of truth vs not destabilizing the world economy and political regime. CORRECTION: Scott did make an explicitly clear pro … Continue reading →

The Bayesian Conspiracy
235 – Gender Differences, with Wes and Jen

The Bayesian Conspiracy

Apr 16, 2025 · 144:14


Wes Fenza and Jen Kesteloot join us to talk about whether there are significant personality differences between men and women, and what (if anything) we should do about that. LINKS Wes's post Men and Women are Not That Different Jacob's quoted … Continue reading →

Money with Mission Podcast
Financial Nesting, Investing, and Partnering: Smart Money Moves for Every Season with Sophia Yudkowsky

Money with Mission Podcast

Apr 9, 2025 · 41:40


What truly shapes our money mindset—and how can we reshape it? In this compelling conversation, Dr. Felecia Froe sits down with certified financial planner Sophia Yudkowsky to explore the roots of our beliefs about money, how family culture and early experiences inform our financial habits, and the empowering role of objective financial guidance. From navigating first jobs and 401(k)s to preparing financially for significant life events like marriage and children, Sophia shares practical strategies and more profound reflections. The episode offers a blend of heartfelt storytelling and tactical wisdom, inviting listeners to reframe their relationship with money, get comfortable asking for help, and, ultimately, embrace the power of informed financial planning.

04:07 Sophia's Financial Journey
05:49 Family Culture and Money
10:30 Financial Planning and Early Career
13:12 Preparing for Parenthood
15:19 Investment Strategies and Options
22:03 Building After-Tax Dollar Buckets
22:25 Understanding Roth IRAs
23:03 Concerns About Government Control
23:44 Importance of Diversifying Investments
24:27 Working with Financial Advisors
25:29 Addressing Money Shame
27:43 Financial Planning for Couples
29:57 Choosing the Right Financial Advisor
31:07 Managing 401k Investments
34:19 How Financial Advisors Get Paid
37:42 Financial Nesting for New Parents

The Bayesian Conspiracy
234 – GiveDirectly, with Nick Allardice

The Bayesian Conspiracy

Apr 2, 2025 · 112:54


We speak to Nick Allardice, President & CEO of GiveDirectly. Afterwards Steven and Eneasz get wrapped up talking about community altruism for a bit. LINKS Give Directly GiveDirectly Tech Innovation Fact Sheet 00:00:05 – Give Directly with Nick Allardice 01:12:19 … Continue reading →

The Bayesian Conspiracy
233 – AI Policy in D.C., with Dave Kasten

The Bayesian Conspiracy

Mar 19, 2025 · 91:01


Dave Kasten joins us to talk about how AI is being discussed in the US government and gives a rather inspiring and hopeful take. LINKS Narrow Path Center for AI Policy Dave Kasten's Essay on the Essay Meta on his Substack … Continue reading →

The Bayesian Conspiracy
Bayes Blast 41 – AI Action Plan

The Bayesian Conspiracy

Mar 8, 2025 · 4:33


The White House wants to hear from you regarding what it should do about AI safety. Now's your chance to spend a few minutes to make someone read your thoughts on the subject! Submissions are due by midnight EST on … Continue reading →

The Bayesian Conspiracy
232 – The Milton Friedman Theory of Change, with John Bennett

The Bayesian Conspiracy

Mar 5, 2025 · 91:20


John Bennett discusses Milton Friedman's model of policy change. LINKS The Milton Friedman Model of Policy Change John Bennett's LinkedIn Friedman's “Capitalism and Freedom” Preface Ross Rheingans-Yoo on Thalidomide at Complex Systems, and at his blog “Every Bay Area Walled … Continue reading →

The Bayesian Conspiracy
Bayes Blast 40 – HPMOR 10 Year Anniversary Parties

The Bayesian Conspiracy

Feb 21, 2025 · 11:57


Want to run an HPMOR Anniversary Party, or get notified if one's happening near you? Fill this out!

The Bayesian Conspiracy
231 – SuperBabies, with Gene Smith

The Bayesian Conspiracy

Feb 19, 2025 · 108:33


Gene Smith on polygenic screening; gene editing to give our children the happiest, healthiest, best lives they can live; and whether we can do this in adults as well. Plus how this will interface with the AI future. LINKS … Continue reading →

The Bayesian Conspiracy
Bayes Blast 39 – Ladylike for Autistics

The Bayesian Conspiracy

Feb 5, 2025 · 22:00


Eneasz tells Jen about Sympathetic Opposition's How and Why to be Ladylike (For Women with Autism), and the podcast takes a 1-episode break

The Farm Podcast Mach II
Thiel, Yudkowsky, Rationalists & the Cult of Ziz w/ David Z. Morris & Recluse

The Farm Podcast Mach II

Feb 3, 2025 · 109:59


Zizians, Rationalist movement, Peter Thiel, Eliezer Yudkowsky, neoreaction, Accelerationism, Curtis Yarvin, AI, AI apocalypse, machine learning, psychedelics, Effective Altruism (EA), Sam Bankman-Fried, Extropianism, Thiel & Yudkowsky as Extropians, Discordianism, life extension, space colonization, cryptocurrencies, Yudkowsky as self-educated, Nick Bostrom, Center for Applied Rationality (CFAR), Rationalism's use of magical thinking, New Thought, Roko's Basilisk, Nick Land, predicting the future, LessWrong, LessWrong's relationship to the Zizians, Ziz, non-binary/trans, vegan Siths, Vasserites, murders linked to Zizians, Zizians in Vermont, Luigi Mangione indirectly influenced by Zizianism, Brian Thompson assassination, Change Healthcare hack, were the hack and assassination targeting UnitedHealth Group influenced by this milieu?, is the Trump administration radicalizing Zizians?, Yudkowsky's links to Sam Bankman-Fried, Leverage Research/Center for Effective Altruism & MK-ULTRA-like techniques used by them, are more cults coming from the Rationalist movement?

Additional Resources:
Leverage Research: https://medium.com/@zoecurzi/my-experience-with-leverage-research-17e96a8e540b#c778
MIRI/Center for Applied Rationality (CFAR): https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe

Music by Keith Allen Dennis: https://keithallendennis.bandcamp.com/
Additional Music: J Money

Get bonus content on Patreon. Hosted on Acast. See acast.com/privacy for more information.

The Bayesian Conspiracy
230 – Making Dating Not Suck with Jacob Falkovich

The Bayesian Conspiracy

Jan 22, 2025 · 81:24


Jacob Falkovich on finding a good match and selfless dating LINKS SecondPerson.Dating – why dating sucks and how you will unsuck it Jacob's post on soccer player skill distribution Go Fuck Someone Selfless Dating Consensual Hostility (re consent culture) steelmanning … Continue reading →

The Bayesian Conspiracy
Bayes Blast 38 – Shitcoins

The Bayesian Conspiracy

Jan 21, 2025 · 4:08


How shitcoins work, plus the Dumb Money movie about the GameStop squeeze.

The Bayesian Conspiracy
Bayes Blast 37 – Kill Your Friend's Cat

The Bayesian Conspiracy

Jan 13, 2025 · 6:31


Why you definitely should kill your friend's cat if you promised to kill your friend's cat. (+Q&A) This is a lightning talk given at the Rationalist MegaMeetup 2024. Based on this Twitter Poll

The Bayesian Conspiracy
229 – Integrating Emotions

The Bayesian Conspiracy

Jan 8, 2025 · 86:37


Eric discusses integrating our emotions via observation and adjustment. His free course is at EnjoyExisting.org or email him – eric@ericlanigan.com LINKS EnjoyExisting.org Ugh Fields You Have Two Brains – Eneasz spends more words on this emotion-brain speculation at this blog … Continue reading →

The Bayesian Conspiracy
LW Census 2024

The Bayesian Conspiracy

Dec 28, 2024 · 1:07


If you haven't yet, go fill out the 2024 LW Census. Right here.

The Bayesian Conspiracy
228 – The Deep Lore of LightHaven, with Oliver Habryka

The Bayesian Conspiracy

Dec 24, 2024 · 126:28


Oliver tells us how Less Wrong instantiated itself into physical reality, along with a bit of deep lore of foundational Rationalist/EA orgs. Donate to LightCone (caretakers of both LessWrong and LightHaven) here!  LINKS LessWrong LightHaven Oliver's very in-depth post on … Continue reading →

The Bayesian Conspiracy
227 – Longevity, Aimed Communities, Network States, with Zoe Isabel Senon

The Bayesian Conspiracy

Dec 11, 2024 · 106:12


We talk to Zoe Isabel Senon about longevity, recent advances, longevity popup cities & group houses, and more (not necessarily in that order). Spoiler: Eneasz is gonna die. 🙁 Also we learn about Network States! LINKS Vitalist Bay Aevitas House … Continue reading →

The Bayesian Conspiracy
226 – The Illusion of Moral Decline

The Bayesian Conspiracy

Nov 29, 2024 · 98:49


We discuss Adam Mastroianni's “The Illusion of Moral Decline” LINKS The Illusion of Moral Decline Touchat Wearable Blanket Hoodie Lighthaven – Eternal September Our episode with Adam on The Rise and Fall of Peer Review The Mind Killer Scott Aaronson … Continue reading →

The Stephen Wolfram Podcast
Business, Innovation and Managing Life (November 13, 2024)

The Stephen Wolfram Podcast

Nov 20, 2024 · 71:56


Stephen Wolfram answers questions from his viewers about business, innovation, and managing life as part of an unscripted livestream series, also available on YouTube here: https://wolfr.am/youtube-sw-business-qa

Questions include:
- How long should someone expect to wait before a new business becomes profitable?
- In your personal/professional journey, what are the important things that you learned the hard way?
- Can you elaborate on some of the unique talents within your team? Perhaps extremely smart or methodical/disciplined people?
- Can you tell us about any exciting projects you're working on right now?
- What do you think about self-driving? Do you think Tesla's approach without LIDAR has legs or do you think the Google Waymo hardware-intense approach is more promising?
- Any tips for building a strong customer base from scratch?
- What's the best way to figure out pricing for a new product or service?
- With your work on Wolfram|Alpha and other projects, you've brought complex computational abilities to the general public in accessible ways. What were some of the challenges in making such powerful tools user friendly, and how do you think accessibility to high-level technology will shape industries in the future?
- If the CEO himself heavily uses the product, you know it's something special.
- Stephen, how do you personally define innovation? What makes something truly innovative instead of just a small improvement?
- How important are critiques? Which do you find more valuable: positive or negative feedback?
- I like real feedback. Pick it apart—that helps in fixing problems/strengthen whatever it is.
- I've been rewatching the first hour of your interview with Yudkowsky since yesterday... do you enjoy those types of interactions often?
- How do you balance maintaining the integrity of your original idea while incorporating customer feedback, which is often influenced by their familiarity with previous, incomparable solutions?
- Do you have a favorite interview/podcast/speech that you've done? Or one that you were most proud of?
- Are you aware that with the weekly livestreams, you basically invented THE PERFECT brain workout?
- Is there a topic or question you wish more podcast hosts would ask you about that they often overlook?
- What is something surprising people may not know about your "day job"?
- You have frequently written about your vast digital archive. What tool do you use for indexing and searching? What other tools have you used or considered in the past and what is your opinion about them? With the improving LLMs and RAG, how do you think searching and indexing will change?

The Bayesian Conspiracy
225 – Live at Lighthaven!

The Bayesian Conspiracy

Nov 14, 2024 · 158:42


Enjoy the public conversations we had the pleasure of having at our live show at Lighthaven in Berkeley. Special thanks to Andrew, Matt, J, Ben and Garrett! Due to the nature of this recording, it's naturally a bit less refined … Continue reading →

The Rabbi Orlofsky Show
One Year Later - What Can We Learn with Avi Yudkowsky PART 2 (Ep. 258)

The Rabbi Orlofsky Show

Nov 11, 2024 · 60:43


Sponsored by Eli & Rena Gray in appreciation of R' Orlofsky, and by Marietta Trophy: We offer custom, high-quality awards for personal recognition, corporate awards, and sports. Mention code “Orlofsky24” for a 10% discount till the end of December. Visit our website www.mariettatrophy.com

Machine Learning Street Talk
Eliezer Yudkowsky and Stephen Wolfram on AI X-risk

Machine Learning Street Talk

Nov 11, 2024 · 258:30


Eliezer Yudkowsky and Stephen Wolfram discuss artificial intelligence and its potential existential risks. They traverse fundamental questions about AI safety, consciousness, computational irreducibility, and the nature of intelligence. The discourse centers on Yudkowsky's argument that advanced AI systems pose an existential threat to humanity, primarily due to the challenge of alignment and the potential for emergent goals that diverge from human values. Wolfram, while acknowledging potential risks, approaches the topic from his signature measured perspective, emphasizing the importance of understanding computational systems' fundamental nature and questioning whether AI systems would necessarily develop the kind of goal-directed behavior Yudkowsky fears.

*** MLST IS SPONSORED BY TUFA AI LABS! The current winners of the ARC challenge, MindsAI, are part of Tufa AI Labs. They are hiring ML engineers. Are you interested?! Please go to https://tufalabs.ai/ ***

TOC:
1. Foundational AI Concepts and Risks
[00:00:01] 1.1 AI Optimization and System Capabilities Debate
[00:06:46] 1.2 Computational Irreducibility and Intelligence Limitations
[00:20:09] 1.3 Existential Risk and Species Succession
[00:23:28] 1.4 Consciousness and Value Preservation in AI Systems
2. Ethics and Philosophy in AI
[00:33:24] 2.1 Moral Value of Human Consciousness vs. Computation
[00:36:30] 2.2 Ethics and Moral Philosophy Debate
[00:39:58] 2.3 Existential Risks and Digital Immortality
[00:43:30] 2.4 Consciousness and Personal Identity in Brain Emulation
3. Truth and Logic in AI Systems
[00:54:39] 3.1 AI Persuasion Ethics and Truth
[01:01:48] 3.2 Mathematical Truth and Logic in AI Systems
[01:11:29] 3.3 Universal Truth vs Personal Interpretation in Ethics and Mathematics
[01:14:43] 3.4 Quantum Mechanics and Fundamental Reality Debate
4. AI Capabilities and Constraints
[01:21:21] 4.1 AI Perception and Physical Laws
[01:28:33] 4.2 AI Capabilities and Computational Constraints
[01:34:59] 4.3 AI Motivation and Anthropomorphization Debate
[01:38:09] 4.4 Prediction vs Agency in AI Systems
5. AI System Architecture and Behavior
[01:44:47] 5.1 Computational Irreducibility and Probabilistic Prediction
[01:48:10] 5.2 Teleological vs Mechanistic Explanations of AI Behavior
[02:09:41] 5.3 Machine Learning as Assembly of Computational Components
[02:29:52] 5.4 AI Safety and Predictability in Complex Systems
6. Goal Optimization and Alignment
[02:50:30] 6.1 Goal Specification and Optimization Challenges in AI Systems
[02:58:31] 6.2 Intelligence, Computation, and Goal-Directed Behavior
[03:02:18] 6.3 Optimization Goals and Human Existential Risk
[03:08:49] 6.4 Emergent Goals and AI Alignment Challenges
7. AI Evolution and Risk Assessment
[03:19:44] 7.1 Inner Optimization and Mesa-Optimization Theory
[03:34:00] 7.2 Dynamic AI Goals and Extinction Risk Debate
[03:56:05] 7.3 AI Risk and Biological System Analogies
[04:09:37] 7.4 Expert Risk Assessments and Optimism vs Reality
8. Future Implications and Economics
[04:13:01] 8.1 Economic and Proliferation Considerations

SHOWNOTES (transcription, references, summary, best quotes etc): https://www.dropbox.com/scl/fi/3st8dts2ba7yob161dchd/EliezerWolfram.pdf?rlkey=b6va5j8upgqwl9s2muc924vtt&st=vemwqx7a&dl=0

The Bayesian Conspiracy
Bonus? The 10-Finger Demon

The Bayesian Conspiracy

Nov 10, 2024 · 53:32


A hypothetical about a finger-collecting demon throws Eneasz for a major loop.

The Bayesian Conspiracy
Live Show At Lighthaven

The Bayesian Conspiracy

Nov 8, 2024 · 0:49


If you're near Berkeley on 11/13/24 at 4pm, come see us! Address and info at this link. We'll take a few questions from email at bayesianconspiracypodcast@gmail.com. Please let us know if you're a supporter so we can give extra thanks … Continue reading →

Increments
#76 (Bonus) - Is P(doom) meaningful? Debating epistemology (w/ Liron Shapira)

Increments

Nov 8, 2024 · 170:58


Liron Shapira, host of Doom Debates, invited us on to discuss Popperian versus Bayesian epistemology and whether we're worried about AI doom. As one might expect knowing us, we only got about halfway through the first subject, so get yourselves ready (presumably with many drinks) for part II in a few weeks! The era of Ben and Vaden's rowdy youtube debates has begun. Vaden is jubilant, Ben is uncomfortable, and the world has never been more annoyed by Popperians. Follow Liron on twitter (@liron) and check out the Doom Debates youtube channel (https://www.youtube.com/@DoomDebates) and podcast (https://podcasts.apple.com/us/podcast/doom-debates/id1751366208).

We discuss:
- Whether we're concerned about AI doom
- Bayesian reasoning versus Popperian reasoning
- Whether it makes sense to put numbers on all your beliefs
- Solomonoff induction
- Objective vs subjective Bayesianism
- Prediction markets and superforecasting

References:
- Vaden's blog post on Cox's Theorem and Yudkowsky's claims of "Laws of Rationality": https://vmasrani.github.io/blog/2021/thecredenceassumption/
- Disproof of probabilistic induction (including Solomonoff Induction): https://arxiv.org/abs/2107.00749
- EA post Vaden mentioned regarding predictions being uncalibrated more than 1yr out: https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations
- Article by Gavin Leech and Misha Yagudin on the reliability of forecasters: https://ifp.org/can-policymakers-trust-forecasters/
- Superforecaster p(doom) is ~1%: https://80000hours.org/2024/09/why-experts-and-forecasters-disagree-about-ai-risk/#:~:text=Domain%20experts%20in%20AI%20estimated,by%202100%20(around%2090%25).
- The existential risk persuasion tournament: https://www.astralcodexten.com/p/the-extinction-tournament
- Some more info in Ben's article on superforecasting: https://benchugg.com/writing/superforecasting/
- Slides on Content vs Probability: https://vmasrani.github.io/assets/pdf/popper_good.pdf

Socials: Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @liron. Come join our discord server! DM us on twitter or send us an email to get a supersecret link. Trust in the reverend Bayes and get exclusive bonus content by becoming a patreon subscriber here (https://www.patreon.com/Increments). Or give us one-time cash donations to help cover our lack of cash donations here (https://ko-fi.com/increments). Click dem like buttons on youtube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ). What's your credence that the second debate is as fun as the first? Tell us at incrementspodcast@gmail.com. Special Guest: Liron Shapira.

The Rabbi Orlofsky Show
On The Frontlines Of October 7th with Avi Yudkowsky PART 1 (Ep. 257)

The Rabbi Orlofsky Show

Nov 5, 2024 · 57:15


Sponsored anonymously for all of the moms out there.

The Bayesian Conspiracy
224 – Cringe Gates and Cultural Drift

The Bayesian Conspiracy

Oct 30, 2024 · 101:37


We discuss Eneasz's Shrimp Welfare Watches The EA Gates, briefly touch on Hanson's Cultural Drift, and tackle a lot of follow-ups and feedback. LINKS Shrimp Welfare Watches The EA Gates (also at TwitterX) Cultural Drift Significant Digits is here! From … Continue reading →

The Bayesian Conspiracy
Bayes Blast 36 – AI Grifters in Birmingham

The Bayesian Conspiracy

Oct 23, 2024 · 13:08


AskWho just attended a “lecture” by AI Grifter Tania Duarte Slide The “TESCREAL” Bungle by Ozy Brennan AskWho Casts AI podcast

The Bayesian Conspiracy
223 – Going Back to the Classics

The Bayesian Conspiracy

Oct 14, 2024 · 126:50


Classic, season one adventure this week! Eneasz and Steven have a loosely structured conversation about the sequences' value, the virtue of silence, scissor statements, and the value of philosophy. LINKS Cryonics is Free! Dan Dennett – Where Am I? The … Continue reading →

The Bayesian Conspiracy
Bonus – Discord Culture (preview)

The Bayesian Conspiracy

Oct 14, 2024 · 16:43


Eneasz chats with TBC Discord member Delta about the cultivation of small online cultures. Get the full episode via our Patreon or our SubStack!  

The Bayesian Conspiracy
Bayes Blast 35 – Disproving The Blindsight Thesis

The Bayesian Conspiracy

Oct 13, 2024 · 9:10


GPT-o1 demonstrates the Blindsight thesis is likely wrong. Peter Watts on Blindsight Andrew Cutler on origins of consciousness part 1 and part 2 Thou Art Godshatter

The Bayesian Conspiracy
Bayes Blast 34 – Content Moderation is Infosec

The Bayesian Conspiracy

Oct 6, 2024 · 15:29


Steven wanted to share an interesting idea from an article that draws a neat parallel between content moderation and information security. The post discussed here is Como is Infosec.

The Bayesian Conspiracy
222 – Consciousness As Recursive Reflections with Daniel Böttger

The Bayesian Conspiracy

Oct 2, 2024 · 119:09


We talk with Daniel about his ACX guest post that posits that thoughts are conscious, rather than brains. LINKS Consciousness As Recursive Reflections Seven Secular Sermons Seven Secular Sermons video on TwitterX LightHaven's Eternal September 0:00:05 – Recursive Reflections 01:29:30 … Continue reading →

The Bayesian Conspiracy
Bayes Blast 33 – Porn Sims

The Bayesian Conspiracy

Sep 27, 2024 · 7:48


Can we achieve our true potential? Based on Interview Day At Thiel Capital. Also mentioned: Meetups Everywhere 2024, Are You Jesus or Hitler

🧠 Let's Talk Brain Health!
Memory Health Matters: Practical Tips & Strategies with Rena Yudkowsky, MSW

🧠 Let's Talk Brain Health!

Sep 25, 2024 · 30:29


In this episode of the Let's Talk Brain Health podcast, host Krystal interviews Rena Yudkowsky, a professional memory coach and geriatric social worker. With over 20 years of experience, Rena offers valuable insights on how midlifers and seniors can maintain and improve their memory. She discusses her journey into gerontology, the importance of memory health, and differentiates between normal age-related memory changes and signs of cognitive decline. The episode also provides practical tips—including the 'Forget Me Not Spot,' mental imagery, and sensory engagement—to enhance memory. Rena highlights four key lifestyle factors—diet, exercise, social stimulation, and cognitive engagement—that play crucial roles in brain health. The episode concludes with Rena emphasizing the importance of confidence in aging and some emerging trends in memory enhancement technologies.

00:00 Introduction to the Podcast and Guest
01:06 Rena's Background and Passion for Memory Coaching
02:38 Understanding Memory Health
05:12 Normal vs. Abnormal Memory Changes
09:35 Practical Tips to Boost Memory
18:45 Lifestyle Factors for Memory Health
22:16 Challenges and Overcoming Memory Issues
25:09 Future of Memory Enhancement and Brain Health
26:22 Rapid Fire Questions and Final Thoughts

Resources: Learn more about Rena and her work on her website. Join Rena's MPower “brain training WhatsApp group”. Read more about memory in Rena's chapter on memory in the “Caregivers Advocate” book by Debbie Compton.

Support this podcast: https://podcasters.spotify.com/pod/show/virtualbrainhealthcenter/support

The Nonlinear Library
AF - The Obliqueness Thesis by Jessica Taylor

The Nonlinear Library

Sep 19, 2024 · 30:04


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Obliqueness Thesis, published by Jessica Taylor on September 19, 2024 on The AI Alignment Forum. In my Xenosystems review, I discussed the Orthogonality Thesis, concluding that it was a bad metaphor. It's a long post, though, and the comments on orthogonality build on other Xenosystems content. Therefore, I think it may be helpful to present a more concentrated discussion on Orthogonality, contrasting Orthogonality with my own view, without introducing dependencies on Land's views. (Land gets credit for inspiring many of these thoughts, of course, but I'm presenting my views as my own here.) First, let's define the Orthogonality Thesis. Quoting Superintelligence for Bostrom's formulation: Intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal. To me, the main ambiguity about what this is saying is the "could in principle" part; maybe, for any level of intelligence and any final goal, there exists (in the mathematical sense) an agent combining those, but some combinations are much more natural and statistically likely than others. Let's consider Yudkowsky's formulations as alternatives. Quoting Arbital: The Orthogonality Thesis asserts that there can exist arbitrarily intelligent agents pursuing any kind of goal. The strong form of the Orthogonality Thesis says that there's no extra difficulty or complication in the existence of an intelligent agent that pursues a goal, above and beyond the computational tractability of that goal. As an example of the computational tractability consideration, sufficiently complex goals may only be well-represented by sufficiently intelligent agents. "Complication" may be reflected in, for example, code complexity; to my mind, the strong form implies that the code complexity of an agent with a given level of intelligence and goals is approximately the code complexity of the intelligence plus the code complexity of the goal specification, plus a constant. Code complexity would influence statistical likelihood for the usual Kolmogorov/Solomonoff reasons, of course. I think, overall, it is more productive to examine Yudkowsky's formulation than Bostrom's, as he has already helpfully factored the thesis into weak and strong forms. Therefore, by criticizing Yudkowsky's formulations, I am less likely to be criticizing a strawman. I will use "Weak Orthogonality" to refer to Yudkowsky's "Orthogonality Thesis" and "Strong Orthogonality" to refer to Yudkowsky's "strong form of the Orthogonality Thesis". Land, alternatively, describes a "diagonal" between intelligence and goals as an alternative to orthogonality, but I don't see a specific formulation of a "Diagonality Thesis" on his part. Here's a possible formulation: Diagonality Thesis: Final goals tend to converge to a point as intelligence increases. The main criticism of this thesis is that formulations of ideal agency, in the form of Bayesianism and VNM utility, leave open free parameters, e.g. priors over un-testable propositions, and the utility function. Since I expect few readers to accept the Diagonality Thesis, I will not concentrate on criticizing it. What about my own view? I like Tsvi's naming of it as an "obliqueness thesis". Obliqueness Thesis: The Diagonality Thesis and the Strong Orthogonality Thesis are false. 
Agents do not tend to factorize into an Orthogonal value-like component and a Diagonal belief-like component; rather, there are Oblique components that do not factorize neatly. (Here, by Orthogonal I mean basically independent of intelligence, and by Diagonal I mean converging to a point in the limit of intelligence.) While I will address Yudkowsky's arguments for the Orthogonality Thesis, I think arguing directly for my view first will be more helpful. In general, it seems ...
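
One way to write down the code-complexity reading of the strong form quoted above, as a rough sketch (the notation is my own shorthand, not the post's): with K(·) standing for description (code) length,

```latex
% Strong Orthogonality, code-complexity reading (informal shorthand, not from the post):
% an agent combining intelligence level I with goal specification G costs roughly
% the complexity of each piece plus a constant,
K(\mathrm{agent}_{I,G}) \approx K(I) + K(G) + O(1),
% and, for the usual Kolmogorov/Solomonoff reasons, its prior weight
% (statistical likelihood) scales roughly as
\qquad \Pr(\mathrm{agent}_{I,G}) \propto 2^{-K(\mathrm{agent}_{I,G})}.
```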

The Bayesian Conspiracy
221 – Why The Field? Why Zombies?? with Liam

The Bayesian Conspiracy

Sep 18, 2024 · 148:02


Eneasz tries to understand why someone would posit a Chalmers Field, and brings up the horrifying implications. LINKS Zombies! Zombies? 2 Rash 2 Unadvised (the Terra Ignota analysis podcast) The previous TBC episode, where we first discussed other aspects of this … Continue reading →

The Bayesian Conspiracy
Bayes Blast 32 – Canadian Health Care

The Bayesian Conspiracy

Sep 16, 2024 · 5:23


The Nonlinear Library
LW - How to Give in to Threats (without incentivizing them) by Mikhail Samin

The Nonlinear Library

Sep 13, 2024 · 9:17


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How to Give in to Threats (without incentivizing them), published by Mikhail Samin on September 13, 2024 on LessWrong. TL;DR: using a simple mixed strategy, LDT can give in to threats, ultimatums, and commitments - while incentivizing cooperation and fair[1] splits instead. This strategy made it much more intuitive to many people I've talked to that smart agents probably won't do weird everyone's-utility-eating things like threatening each other or participating in commitment races.

1. The Ultimatum game
This part is taken from planecrash[2][3]. You're in the Ultimatum game. You're offered 0-10 dollars. You can accept or reject the offer. If you accept, you get what's offered, and the offerer gets $(10-offer). If you reject, both you and the offerer get nothing.

The simplest strategy that incentivizes fair splits is to accept everything ≥ 5 and reject everything < 5. The offerer can't do better than by offering you 5. If you accepted offers of 1, the offerer that knows this would always offer you 1 and get 9, instead of being incentivized to give you 5. Being unexploitable in the sense of incentivizing fair splits is a very important property that your strategy might have. With the simplest strategy, if you're offered 5..10, you get 5..10; if you're offered 0..4, you get 0 in expectation. Can you do better than that? What is a strategy that you could use that would get more than 0 in expectation if you're offered 1..4, while still being unexploitable (i.e., still incentivizing splits of at least 5)? I encourage you to stop here and try to come up with a strategy before continuing.

The solution, explained by Yudkowsky in planecrash (children split 12 jellychips, so the offers are 0..12):

When the children return the next day, the older children tell them the correct solution to the original Ultimatum Game. It goes like this: When somebody offers you a 7:5 split, instead of the 6:6 split that would be fair, you should accept their offer with slightly less than 6/7 probability. Their expected value from offering you 7:5, in this case, is 7 * slightly less than 6/7, or slightly less than 6. This ensures they can't do any better by offering you an unfair split; but neither do you try to destroy all their expected value in retaliation. It could be an honest mistake, especially if the real situation is any more complicated than the original Ultimatum Game.

If they offer you 8:4, accept with probability slightly-more-less than 6/8, so they do even worse in their own expectation by offering you 8:4 than 7:5. It's not about retaliating harder, the harder they hit you with an unfair price - that point gets hammered in pretty hard to the kids, a Watcher steps in to repeat it. This setup isn't about retaliation, it's about what both sides have to do, to turn the problem of dividing the gains, into a matter of fairness; to create the incentive setup whereby both sides don't expect to do any better by distorting their own estimate of what is 'fair'.

[The next stage involves a complicated dynamic-puzzle with two stations, that requires two players working simultaneously to solve. After it's been solved, one player locks in a number on a 0-12 dial, the other player may press a button, and the puzzle station spits out jellychips thus divided. The gotcha is, the 2-player puzzle-game isn't always of equal difficulty for both players. 
Sometimes, one of them needs to work a lot harder than the other.] They play the 2-station video games again. There's less anger and shouting this time. Sometimes, somebody rolls a continuous-die and then rejects somebody's offer, but whoever gets rejected knows that they're not being punished. Everybody is just following the Algorithm. Your notion of fairness didn't match their notion of fairness, and they did what the Algorithm says to do in that case, but ...
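
A minimal sketch of the mixed strategy described above, for the 0-10 dollar version of the game (this is my own illustration in Python, not code from the post; the total, the epsilon margin standing in for "slightly less than", and all names are choices made here for the example):

```python
import random

TOTAL = 10         # dollars on the table in the 0-10 version of the game
FAIR = TOTAL / 2   # the fair share: 5
EPSILON = 1e-6     # stand-in for "slightly less than"

def accept_probability(offer_to_me: float) -> float:
    """Unexploitable mixed strategy for the responder.

    Fair-or-better offers are always accepted. An unfair offer that leaves the
    proposer TOTAL - offer_to_me is accepted with probability just under
    FAIR / (TOTAL - offer_to_me), so the proposer's expected take stays just
    under FAIR and they cannot profit by offering less than a fair split.
    """
    if offer_to_me >= FAIR:
        return 1.0
    proposer_keeps = TOTAL - offer_to_me
    return max(0.0, FAIR / proposer_keeps - EPSILON)

def respond(offer_to_me: float) -> bool:
    """Roll the 'continuous die' and accept or reject the offer."""
    return random.random() < accept_probability(offer_to_me)

if __name__ == "__main__":
    for offer in range(TOTAL + 1):
        p = accept_probability(offer)
        print(f"offered {offer:2d}: accept w.p. {p:.4f}, "
              f"proposer EV {(TOTAL - offer) * p:.4f}, my EV {offer * p:.4f}")
```

Running the loop shows the proposer's expected value topping out at the fair offer of 5, while the responder still collects something in expectation on unfair offers, which is the improvement over the simple accept-only-at-5-or-more rule.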

The Nonlinear Library
LW - Executable philosophy as a failed totalizing meta-worldview by jessicata

The Nonlinear Library

Sep 5, 2024 · 7:32


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Executable philosophy as a failed totalizing meta-worldview, published by jessicata on September 5, 2024 on LessWrong. (this is an expanded, edited version of an x.com post) It is easy to interpret Eliezer Yudkowsky's main goal as creating a friendly AGI. Clearly, he has failed at this goal and has little hope of achieving it. That's not a particularly interesting analysis, however. A priori, creating a machine that makes things ok forever is not a particularly plausible objective. Failure to do so is not particularly informative. So I'll focus on a different but related project of his: executable philosophy. Quoting Arbital: Two motivations of "executable philosophy" are as follows: 1. We need a philosophical analysis to be "effective" in Turing's sense: that is, the terms of the analysis must be useful in writing programs. We need ideas that we can compile and run; they must be "executable" like code is executable. 2. We need to produce adequate answers on a time scale of years or decades, not centuries. In the entrepreneurial sense of "good execution", we need a methodology we can execute on in a reasonable timeframe. There is such a thing as common sense rationality, which says the world is round, you shouldn't play the lottery, etc. Formal notions like Bayesianism, VNM utility theory, and Solomonoff induction formalize something strongly related to this common sense rationality. Yudkowsky believes further study in this tradition can supersede ordinary academic philosophy, which he believes to be conceptually weak and motivated to continue ongoing disputes for more publications. In the Sequences, Yudkowsky presents these formal ideas as the basis for a totalizing meta-worldview, of epistemic and instrumental rationality, and uses the meta-worldview to argue for his object-level worldview (which includes many-worlds, AGI foom, importance of AI alignment, etc.). While one can get totalizing (meta-)worldviews from elsewhere (such as interdisciplinary academic studies), Yudkowsky's (meta-)worldview is relatively easy to pick up for analytically strong people (who tend towards STEM), and is effective ("correct" and "winning") relative to its simplicity. Yudkowsky's source material and his own writing do not form a closed meta-worldview, however. There are open problems as to how to formalize and solve real problems. Many of the more technical sort are described in MIRI's technical agent foundations agenda. These include questions about how to parse a physically realistic problem as a set of VNM lotteries ("decision theory"), how to use something like Bayesianism to handle uncertainty about mathematics ("logical uncertainty"), how to formalize realistic human values ("value loading"), and so on. Whether or not the closure of this meta-worldview leads to creation of friendly AGI, it would certainly have practical value. It would allow real world decisions to be made by first formalizing them within a computational framework (related to Yudkowsky's notion of "executable philosophy"), whether or not the computation itself is tractable (with its tractable version being friendly AGI). The practical strategy of MIRI as a technical research institute is to go meta on these open problems by recruiting analytically strong STEM people (especially mathematicians and computer scientists) to work on them, as part of the agent foundations agenda. 
I was one of these people. While we made some progress on these problems (such as with the Logical Induction paper), we didn't come close to completing the meta-worldview, let alone building friendly AGI. With the Agent Foundations team at MIRI eliminated, MIRI's agent foundations agenda is now unambiguously a failed project. I had called MIRI technical research as likely to fail around 2017 with the increase in internal secrecy, but at thi...

The Bayesian Conspiracy
220 – Chalmer's Zombies, with Liam

The Bayesian Conspiracy

Sep 4, 2024 · 123:06


We dig into the classic LW post Zombies! Zombies? and talk a lot of philosophy with Liam from the 2 Rash 2 Unadvised podcast. I (Steven) spent a bunch of time trying to export the conversation from Discord that Liam, … Continue reading →

The Nonlinear Library
LW - How to hire somebody better than yourself by lukehmiles

The Nonlinear Library

Aug 29, 2024 · 7:36


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How to hire somebody better than yourself, published by lukehmiles on August 29, 2024 on LessWrong. TLDR: Select candidates heterogeneously, then give them all a very hard test, only continue with candidates that do very well (accept that you lose some good ones), and only then judge on interviews/whatever.

I'm no expert but I've made some recommendations that turned out pretty well -- maybe like 5 ever. This post would probably be better if I waited 10 years to write it. Nonetheless, I think my method is far better than what most orgs/corps do. If you have had mad hiring success (judging by what your org accomplished) then please comment! Half-remembered versions of Paul Graham's taste thing and Yudkowsky's Vinge's Law have led some folks to think that judging talent above your own is extremely difficult. I do not think so.

Prereqs:
- It's the kind of position where someone super good at it can generate a ton of value - eg sales/outreach, coding, actual engineering, research, management, ops, ...
- Lots of candidates are available and you expect at least some of them are super good at the job.
- You have at least a month to look.
- It's possible for someone to demonstrate extreme competence at this type of job in a day or two.
- Your org is trying to do a thing - rather than be a thing.
- You want to succeed at that thing - ie you don't have some other secret goal.
- Your goal with hiring people is to do that thing better/faster - ie you don't need more friends or a prestige bump.
- Your work situation does not demand that you look stand-out competent - ie you don't unemploy yourself if you succeed in hiring well.

You probably don't meet the prereqs. You are probably in it for the journey more than the destination; your life doesn't improve if org goals are achieved; your raises depend on you not out-hiring yourself; etc. Don't feel bad - it is totally ok to be an ordinary social creature! Being a goal psycho often sucks in every way except all the accomplished goals. If you do meet the prereqs, then good news, hiring is almost easy. You just need to find people who are good at doing exactly what you need done.

Here's the method:
- Do look at performance (measure it yourself)
- Accept noise
- Don't look at anything else (yet)
- Except that they work hard

Do look at performance: Measure it yourself. Make up a test task. You need something that people can take without quitting their jobs or much feedback from you; you and the candidate should not become friends during the test; a timed 8-hour task is a reasonable starting point. Most importantly, you must be able to quickly and easily distinguish good results from very good results. The harder the task, the easier it is to judge the success of top attempts. If you yourself cannot complete the task at all, then congratulations, you now have a method to judge talent far above your own. Take that, folk Vinge's law. Important! Make the task something where success really does tell you they'll do the job well. Not a proxy IQ test or leetcode. The correlation is simply not high enough. Many people think they just need to hire someone generally smart and capable. I disagree, unless your org is very large or nebulous. This task must also not be incredibly lame or humiliating, or you will only end up hiring people lacking a spine. (Common problem.) Don't filter out the spines. 
It can be hard to think of a good test task but it is well worth all the signal you will get. Say you are hiring someone to arrange all your offices. Have applicants come arrange a couple offices and see if people like it. Pretty simple. Say you are hiring someone to build a house. Have contractors build a shed in one day. Ten sheds only cost like 5% of what a house costs, but bad builders will double your costs and timeline. Pay people as much as you can for their time and the...

The Nonlinear Library
LW - What is it to solve the alignment problem? by Joe Carlsmith

The Nonlinear Library

Aug 27, 2024 · 91:45


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What is it to solve the alignment problem?, published by Joe Carlsmith on August 27, 2024 on LessWrong. People often talk about "solving the alignment problem." But what is it to do such a thing? I wanted to clarify my thinking about this topic, so I wrote up some notes. In brief, I'll say that you've solved the alignment problem if you've: 1. avoided a bad form of AI takeover, 2. built the dangerous kind of superintelligent AI agents, 3. gained access to the main benefits of superintelligence, and 4. become able to elicit some significant portion of those benefits from some of the superintelligent AI agents at stake in (2).[1] The post also discusses what it would take to do this. In particular: I discuss various options for avoiding bad takeover, notably: Avoiding what I call "vulnerability to alignment" conditions; Ensuring that AIs don't try to take over; Preventing such attempts from succeeding; Trying to ensure that AI takeover is somehow OK. (The alignment discourse has been surprisingly interested in this one; but I think it should be viewed as an extreme last resort.) I discuss different things people can mean by the term "corrigibility"; I suggest that the best definition is something like "does not resist shut-down/values-modification"; and I suggest that we can basically just think about incentives for/against corrigibility in the same way we think about incentives for/against other types of problematic power-seeking, like actively seeking to gain resources. I also don't think you need corrigibility to avoid takeover; and I think avoiding takeover should be our focus. I discuss the additional role of eliciting desired forms of task-performance, even once you've succeeded at avoiding takeover, and I modify the incentives framework I offered in a previous post to reflect the need for the AI to view desired task-performance as the best non-takeover option. I examine the role of different types of "verification" in avoiding takeover and eliciting desired task-performance. In particular: I distinguish between what I call "output-focused" verification and "process-focused" verification, where the former, roughly, focuses on the output whose desirability you want to verify, whereas the latter focuses on the process that produced that output. I suggest that we can view large portions of the alignment problem as the challenge of handling shifts in the amount we can rely on output-focused verification (or at least, our current mechanisms for output-focused verification). I discuss the notion of "epistemic bootstrapping" - i.e., building up from what we can verify, whether by process-focused or output-focused means, in order to extend our epistemic reach much further - as an approach to this challenge.[2] I discuss the relationship between output-focused verification and the "no sandbagging on checkable tasks" hypothesis about capability elicitation. I discuss some example options for process-focused verification. Finally, I express skepticism that solving the alignment problem requires imbuing a superintelligent AI with intrinsic concern for our "extrapolated volition" or our "values-on-reflection." 
In particular, I think just getting an "honest question-answerer" (plus the ability to gate AI behavior on the answers to various questions) is probably enough, since we can ask it the sorts of questions we wanted extrapolated volition to answer. (And it's not clear that avoiding flagrantly-bad behavior, at least, required answering those questions anyway.) Thanks to Carl Shulman, Lukas Finnveden, and Ryan Greenblatt for discussion. 1. Avoiding vs. handling vs. solving the problem What is it to solve the alignment problem? I think the standard at stake can be quite hazy. And when initially reading Bostrom and Yudkowsky, I think the image that built up most prominently i...

The Bayesian Conspiracy
219 – On Excellence, with Tracing Woodgrains

The Bayesian Conspiracy

Aug 21, 2024 · 82:26


Inspired by Trace's speech about excellence at VibeCamp 3, Eneasz and Steven speak to Tracing Woodgrains about Excellence and its various aspects LINKS Trace's SubStack Trace's Twitter Wes on TracingWoodgrains as the Nietzschean Superman Gymnastics Then vs Now video Evolution … Continue reading →

The Bayesian Conspiracy
218 – Bentham's Bulldog and the Best Argument for God

The Bayesian Conspiracy

Aug 7, 2024 · 134:29


Spurred by comments from a couple of episodes ago, we wanted to make sure we didn't misrepresent Matthew's position and he agreed to come lay it out for us on the show. Check out the links below to dive in … Continue reading →

The Bayesian Conspiracy
217 – Consensual Violence with Rikard

The Bayesian Conspiracy

Jul 24, 2024 · 110:09


Rikard joins us to speak about the benefits of consensual violence LINKS Rikard's Substack – Drunken Masterpieces Rikard is @rikardhjort Twitter Defense Against the Dark Arts is just the Dark Arts The Ultimate Self-Defense Championship Armchair Violence The poem Rikard … Continue reading →

The Bayesian Conspiracy
216 – On Dying (And Cryonics)

The Bayesian Conspiracy

Jul 10, 2024 · 148:27


Rachel Zuber joins us to talk about hospice work, dying in America, and why she's less optimistic about cryonics now. LINKS All the Living and the Dead Caitlin Doughty on YouTube CI cryonics case reports Places of Death in the … Continue reading →