Fluidity


After the collapse of the 20th-century systematic mode of social organization, how can we move from our internet-enabled atomized mode toward a fluid mode? We take problems of meaning-making, typically considered spiritual, and turn them into practical problems, which are more tractable.

You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks

This is a nonfiction audiobook narrated by Matt Arnold with the permission of the author, David Chapman. Full text at: https://meaningness.com

Matt Arnold


    • Latest episode: Jan 19, 2025
    • New episodes: monthly
    • Average duration: 26m
    • Episodes: 155
    • Seasons: 3



    Latest episodes from Fluidity

    A Better Future, Without Backprop

    Jan 19, 2025 (5:06)


    This concludes "Gradient Dissent", the companion document to "Better Without AI". Thank you so much for listening! You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks   If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold   Original music by Kevin MacLeod.   This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    Better Text Generation With Science And Engineering

    Jan 12, 2025 (38:20)


    Current text generators, such as ChatGPT, are highly unreliable, difficult to use effectively, unable to do many things we might want them to, and extremely expensive to develop and run. These defects are inherent in their underlying technology. Quite different methods could plausibly remedy all these defects. Would that be good, or bad? https://betterwithout.ai/better-text-generators

    John McCarthy's paper “Programs with common sense”: http://www-formal.stanford.edu/jmc/mcc59/mcc59.html
    Harry Frankfurt, "On Bullshit": https://www.amazon.com/dp/B001EQ4OJW/?tag=meaningness-20
    Petroni et al., “Language Models as Knowledge Bases?”: https://aclanthology.org/D19-1250/
    Gwern Branwen, “The Scaling Hypothesis”: gwern.net/scaling-hypothesis
    Rich Sutton's “Bitter Lesson”: www.incompleteideas.net/IncIdeas/BitterLesson.html
    Guu et al.'s “Retrieval augmented language model pre-training” (REALM): http://proceedings.mlr.press/v119/guu20a/guu20a.pdf
    Borgeaud et al.'s “Improving language models by retrieving from trillions of tokens” (RETRO): https://arxiv.org/pdf/2112.04426.pdf
    Izacard et al., “Few-shot Learning with Retrieval Augmented Language Models”: https://arxiv.org/pdf/2208.03299.pdf
    Chirag Shah and Emily M. Bender, “Situating Search”: https://dl.acm.org/doi/10.1145/3498366.3505816
    David Chapman's original version of the proposal he puts forth in this episode: twitter.com/Meaningness/status/1576195630891819008
    Lan et al., “Copy Is All You Need”: https://arxiv.org/abs/2307.06962
    Mitchell A. Gordon's “RETRO Is Blazingly Fast”: https://mitchgordon.me/ml/2022/07/01/retro-is-blazing.html
    Min et al.'s “Silo Language Models”: https://arxiv.org/pdf/2308.04430.pdf
    W. Daniel Hillis, The Connection Machine, 1986: https://www.amazon.com/dp/0262081571/?tag=meaningness-20
    Ouyang et al., “Training language models to follow instructions with human feedback”: https://arxiv.org/abs/2203.02155
    Ronen Eldan and Yuanzhi Li, “TinyStories: How Small Can Language Models Be and Still Speak Coherent English?”: https://arxiv.org/pdf/2305.07759.pdf
    Li et al., “Textbooks Are All You Need II: phi-1.5 technical report”: https://arxiv.org/abs/2309.05463
    Henderson et al., “Foundation Models and Fair Use”: https://arxiv.org/abs/2303.15715
    Authors Guild v. Google: https://en.wikipedia.org/wiki/Authors_Guild%2C_Inc._v._Google%2C_Inc.
    Abhishek Nagaraj and Imke Reimers, “Digitization and the Market for Physical Works: Evidence from the Google Books Project”: https://www.aeaweb.org/articles?id=10.1257/pol.20210702

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.
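
    For a concrete feel for the retrieval alternative several of the links above describe (REALM, RETRO, “Copy Is All You Need”), here is a minimal sketch; it is my toy illustration, not code from the book or from those papers. The idea: answer a query by finding and copying the best-matching stored passage, with attribution. The three-document corpus and the bag-of-words cosine similarity are stand-in assumptions; real systems retrieve from vast corpora using learned embeddings.

        # Toy retrieval-based "generation": copy from an attributable source
        # instead of sampling text from opaque weights. The corpus and the
        # scoring function are deliberately simplistic stand-ins.
        from collections import Counter
        import math

        docs = {
            "mcc59": "McCarthy proposed programs with common sense using logical reasoning.",
            "realm": "REALM augments pre-training with a learned text retriever.",
            "retro": "RETRO improves language models by retrieving from trillions of tokens.",
        }

        def bow(text):
            # Crude bag-of-words "embedding".
            return Counter(text.lower().split())

        def cosine(a, b):
            dot = sum(a[t] * b[t] for t in a)
            na = math.sqrt(sum(v * v for v in a.values()))
            nb = math.sqrt(sum(v * v for v in b.values()))
            return dot / (na * nb) if na and nb else 0.0

        def answer(query):
            # Copy the best-matching passage verbatim and cite its source,
            # so the output can be checked in a way sampled text cannot.
            source, text = max(docs.items(), key=lambda kv: cosine(bow(query), bow(kv[1])))
            return f"{text} [source: {source}]"

        print(answer("how do language models use retrieving from trillions of tokens?"))
        # -> RETRO improves language models by retrieving from trillions of tokens. [source: retro]

    The copy-with-citation step is the point: because the output is lifted from an identifiable source, it can be verified against that source.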

    Classifying Images: Massive Parallelism And Surface Features

    Jan 5, 2025 (15:05)


    Analysis of image classifiers demonstrates that it is possible to understand backprop networks at the task-relevant run-time algorithmic level. In these systems, at least, networks gain their power from deploying massive parallelism to check for the presence of a vast number of simple, shallow patterns. https://betterwithout.ai/images-surface-features

    This episode has a lot of links:
    David Chapman's earliest public mention, in February 2016, of image classifiers probably using color and texture in ways that "cheat": twitter.com/Meaningness/status/698688687341572096
    Jordana Cepelewicz's “Where we see shapes, AI sees textures,” Quanta Magazine, July 1, 2019: https://www.quantamagazine.org/where-we-see-shapes-ai-sees-textures-20190701/
    “Suddenly, a leopard print sofa appears”, May 2015: https://web.archive.org/web/20150622084852/http://rocknrollnerd.github.io/ml/2015/05/27/leopard-sofa.html
    “Understanding How Image Quality Affects Deep Neural Networks”, April 2016: https://arxiv.org/abs/1604.04004
    Goodfellow et al., “Explaining and Harnessing Adversarial Examples,” December 2014: https://arxiv.org/abs/1412.6572
    “Universal adversarial perturbations,” October 2016: https://arxiv.org/pdf/1610.08401v1.pdf
    “Exploring the Landscape of Spatial Robustness,” December 2017: https://arxiv.org/abs/1712.02779
    “Overinterpretation reveals image classification model pathologies,” NeurIPS 2021: https://proceedings.neurips.cc/paper/2021/file/8217bb4e7fa0541e0f5e04fea764ab91-Paper.pdf
    “Approximating CNNs with Bag-of-Local-Features Models Works Surprisingly Well on ImageNet,” ICLR 2019: https://openreview.net/forum?id=SkfMWhAqYQ
    Baker et al.'s “Deep convolutional networks do not classify based on global object shape,” PLOS Computational Biology, 2018: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1006613
    François Chollet's Twitter threads about AI producing images of horses with extra legs: twitter.com/fchollet/status/1573836241875120128 and twitter.com/fchollet/status/1573843774803161090
    “Zoom In: An Introduction to Circuits,” 2020: https://distill.pub/2020/circuits/zoom-in/
    Geirhos et al., “ImageNet-Trained CNNs Are Biased Towards Texture; Increasing Shape Bias Improves Accuracy and Robustness,” ICLR 2019: https://openreview.net/forum?id=Bygh9j09KX
    Dehghani et al., “Scaling Vision Transformers to 22 Billion Parameters,” 2023: https://arxiv.org/abs/2302.05442
    Hasson et al., “Direct Fit to Nature: An Evolutionary Perspective on Biological and Artificial Neural Networks,” February 2020: https://www.gwern.net/docs/ai/scaling/2020-hasson.pdf
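
    To make “massive parallelism checking shallow patterns” concrete, here is a minimal sketch in the spirit of the bag-of-local-features (BagNet) paper linked above; it is my toy illustration, not code from the book or the paper. Each small patch votes for every class using only local, texture-like evidence, the votes are averaged, and nothing ever inspects global shape. The random weights are a stand-in for a trained detector bank.

        # Toy "bag of local features" classifier: many shallow patch-level
        # detectors run in parallel and their votes are averaged. Weights are
        # random (untrained) purely to show the structure.
        import numpy as np

        rng = np.random.default_rng(0)
        PATCH, N_CLASSES = 9, 10
        W = rng.normal(size=(PATCH * PATCH * 3, N_CLASSES))  # one linear detector bank

        def classify(image):
            """image: (H, W, 3) array. Each 9x9 patch votes for every class."""
            h, w, _ = image.shape
            votes = np.zeros(N_CLASSES)
            count = 0
            for y in range(0, h - PATCH + 1, PATCH):
                for x in range(0, w - PATCH + 1, PATCH):
                    patch = image[y:y + PATCH, x:x + PATCH].reshape(-1)
                    votes += patch @ W  # shallow, purely local evidence
                    count += 1
            return int(np.argmax(votes / count))

        print(classify(rng.random((72, 72, 3))))  # class with the most patch votes

    A classifier with this structure can succeed on local texture cues alone, which is exactly why a leopard-print sofa (see the link above) gets classified as a leopard.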

    Do AI As Engineering Instead

    Dec 15, 2024 (15:47)


    Current AI practice is not engineering, even when it aims for practical applications, because it is not based on scientific understanding. Enforcing engineering norms on the field could lead to considerably safer systems. https://betterwithout.ai/AI-as-engineering

    This episode has a lot of links! Here they are.
    Michael Nielsen's “The role of ‘explanation' in AI”: https://michaelnotebook.com/ongoing/sporadica.html#role_of_explanation_in_AI
    Subbarao Kambhampati's “Changing the Nature of AI Research”: https://dl.acm.org/doi/pdf/10.1145/3546954
    Chris Olah and his collaborators: “Thread: Circuits”: distill.pub/2020/circuits/ and “An Overview of Early Vision in InceptionV1”: distill.pub/2020/circuits/early-vision/
    Dai et al., “Knowledge Neurons in Pretrained Transformers”: https://arxiv.org/pdf/2104.08696.pdf
    Meng et al.: “Locating and Editing Factual Associations in GPT”: rome.baulab.info and “Mass-Editing Memory in a Transformer”: https://arxiv.org/pdf/2210.07229.pdf
    François Chollet on image generators putting the wrong number of legs on horses: twitter.com/fchollet/status/1573879858203340800
    Neel Nanda's “Longlist of Theories of Impact for Interpretability”: https://www.lesswrong.com/posts/uK6sQCNMw8WKzJeCQ/a-longlist-of-theories-of-impact-for-interpretability
    Zachary C. Lipton's “The Mythos of Model Interpretability”: https://arxiv.org/abs/1606.03490
    Meng et al., “Locating and Editing Factual Associations in GPT”: https://arxiv.org/pdf/2202.05262.pdf
    Belrose et al., “Eliciting Latent Predictions from Transformers with the Tuned Lens”: https://arxiv.org/abs/2303.08112
    “Progress measures for grokking via mechanistic interpretability”: https://arxiv.org/abs/2301.05217
    Conmy et al., “Towards Automated Circuit Discovery for Mechanistic Interpretability”: https://arxiv.org/abs/2304.14997
    Elhage et al., “Softmax Linear Units”: transformer-circuits.pub/2022/solu/index.html
    Filan et al., “Clusterability in Neural Networks”: https://arxiv.org/pdf/2103.03386.pdf
    Cammarata et al., “Curve circuits”: distill.pub/2020/circuits/curve-circuits/

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    Do AI As Science Instead

    Oct 16, 2024 (19:21)


    Few AI experiments constitute meaningful tests of hypotheses. As a branch of machine learning research, AI science has concentrated on black box investigation of training time phenomena. The best of this work has been scientifically excellent. However, the hypotheses tested are mainly irrelevant to user and societal concerns. https://betterwithout.ai/AI-as-science

    This chapter references Chapman's essay, "How should we evaluate progress in AI?": https://metarationality.com/artificial-intelligence-progress
    "Troubling Trends in Machine Learning Scholarship", Zachary C. Lipton and Jacob Steinhardt: https://arxiv.org/abs/1807.03341

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    Do AI As Science And Engineering Instead

    Sep 8, 2024 (12:21)


    Do AI As Science And Engineering Instead - We've seen that current AI practice leads to technologies that are expensive, difficult to apply in real-world situations, and inherently unsafe. Neglected scientific and engineering investigations can bring better understanding of the risks of current AI technology, and can lead to safer technologies. https://betterwithout.ai/science-engineering-vs-AI

    Run-Time Task-Relevant Algorithmic Understanding - The type of scientific and engineering understanding most relevant to AI safety is run-time, task-relevant, and algorithmic. That can lead to more reliable, safer systems. Unfortunately, gaining such understanding has been neglected in AI research, so currently we have little. https://betterwithout.ai/AI-algorithmic-level

    For more information, see David Chapman's 2017 essay "How should we evaluate progress in AI?": https://betterwithout.ai/artificial-intelligence-progress

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    Backpropaganda: Anti-Rational Neuro-Mythology

    Aug 25, 2024 (29:41)


    Current AI results from experimental variation of mechanisms, unguided by theoretical principles. That has produced systems that can do amazing things. On the other hand, they are extremely error-prone and therefore unsafe. Backpropaganda, a collection of misleading ways of talking about “neural networks,” justifies continuing in this misguided direction. https://betterwithout.ai/backpropaganda

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    Artificial Neurons Considered Harmful, Part 2

    Aug 11, 2024 (28:26)


    The conclusion of this chapter. So-called “neural networks” are extremely expensive, poorly understood, unfixably unreliable, deceptive, data hungry, and inherently limited in capabilities. In short: they are bad. https://betterwithout.ai/artificial-neurons-considered-harmful

    Sayash Kapoor and Arvind Narayanan's "The bait and switch behind AI risk prediction tools": https://aisnakeoil.substack.com/p/the-bait-and-switch-behind-ai-risk
    A video titled "Latent Space Walk": https://www.youtube.com/watch?v=bPgwwvjtX_g
    Another video showing a walk through latent space: https://www.youtube.com/watch?v=YnXiM97ZvOM

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    Gradient Dissent - Artificial Neurons Considered Harmful, Part 1

    Jul 21, 2024 (15:41)


    This begins "Gradient Dissent", the companion material to "Better Without AI". The neural network and GPT technologies that power current artificial intelligence are exceptionally error prone, deceptive, poorly understood, and dangerous. They are widely used without adequate safeguards in situations where they cause increasing harms. They are not inevitable, and we should replace them with better alternatives. https://betterwithout.ai/gradient-dissent Artificial Neurons Considered Harmful, Part 1 - So-called “neural networks” are extremely expensive, poorly understood, unfixably unreliable, deceptive, data hungry, and inherently limited in capabilities. In short: they are bad. https://betterwithout.ai/artificial-neurons-considered-harmful You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    Futurism, Politics, and Responsibility

    Jun 23, 2024 (22:09)


    The five short chapters in this episode are the conclusion of the main body of Better Without AI. Next, we'll begin the book's appendix, Gradient Dissent.

    Cozy Futurism - If we knew we'd never get flying cars, most people wouldn't care. What do we care about? https://betterwithout.ai/cozy-futurism
    Meaningful Futurism - Likeable futures are meaningful, not just materially comfortable. Bringing one about requires imagining it. I invite you to do that! https://betterwithout.ai/meaningful-future
    The Inescapable: Politics - No realistic approach to future AI can avoid questions of power and social organization. https://betterwithout.ai/inescapable-AI-politics
    Responsibility - https://betterwithout.ai/responsibility
    This is about you - https://betterwithout.ai/about-you

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    A Future We Would Like

    Jun 16, 2024 (12:52)


    A Future We Would Like - The most important questions are not about technology but about us. What sorts of future would we like? What role could AI play in getting us there, and also in that world? What is your own role in helping that happen? https://betterwithout.ai/a-future-we-would-like

    How AI Destroyed The Future - We are doing a terrible job of thinking about the most important question because unimaginably powerful evil artificial intelligences are controlling our brains. https://betterwithout.ai/AI-destroyed-the-future

    A One-Bit Future - Superintelligence scenarios reduce the future to infinitely good or infinitely bad. Both are possible, but we cannot reason about or act toward them. Messy complicated good-and-bad futures are probably more likely, and in any case are more feasible to influence. https://betterwithout.ai/one-bit-future

    This episode mentions David Chapman's essay "Vaster Than Ideology" for getting AI out of your head.
    Text link: https://meaningness.com/vaster-than-ideology
    Episode link: https://fluidity.libsyn.com/vaster-than-ideology

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    Scientific Progress Without AI

    May 26, 2024 (20:01)


    Stop obstructing scientific progress! We already know how to dramatically accelerate science: by getting out of the way. https://betterwithout.ai/stop-obstructing-science

    How to science better. What do exceptional scientists do differently from mediocre ones? Can we train currently-mediocre ones to do better? https://betterwithout.ai/better-science-without-AI

    Scenius: upgrading science FTW. Empirically, breakthroughs that enable great progress depend on particular, uncommon social constellations and accompanying social practices. Let's encourage these! https://betterwithout.ai/human-scenius-vs-artificial-genius

    Matt Clancy reviews the evidence for scientific progress slowing, with citations and graphs: https://twitter.com/mattsclancy/status/1612440718177603584
    "Scenius, or Communal Genius", Kevin Kelly, The Technium: https://kk.org/thetechnium/scenius-or-comm/

    Limits To Experimental Induction

    Apr 21, 2024 (11:35)


    Progress requires experimentation. Suggested ways AI could speed progress by automating experiments appear mistaken. https://betterwithout.ai/limits-to-induction

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    Bonus Episode 8: Going Down On The Phenomenon

    Apr 17, 2024 (8:56)


    Forgive the sound quality on this episode; I recorded it live in front of an audience on a platform floating in a lake during the 2024 solar eclipse. This is a standalone essay by David Chapman on metarationality.com. How scientific research is like cunnilingus: a phenomenology of epistemology. https://metarationality.com/going-down-on-the-phenomenon

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    The Role Of Intelligence In Science

    Apr 7, 2024 (15:11)


    What Is The Role Of Intelligence In Science? Actually, what are “science” and “intelligence”? Precise, explicit definitions aren't necessary, but discussions of Transformative AI seem to depend implicitly on particular models of both. It matters if those models are wrong. https://betterwithout.ai/intelligence-in-science
    Katja Grace, “Counterarguments to the basic AI x-risk case”: https://aiimpacts.org/counterarguments-to-the-basic-ai-x-risk-case/

    What Do Unusually Intelligent People Do? If we want to know what a superintelligent AI might do, and how, it could help to investigate what the most intelligent humans do, and how. If we want to know how to dramatically accelerate science and technology development, it could help to investigate what the best scientists and technologists do, and how. https://betterwithout.ai/what-intelligent-people-do
    Patrick Collison and Tyler Cowen, “We Need a New Science of Progress,” The Atlantic, July 30, 2019: https://www.theatlantic.com/science/archive/2019/07/we-need-new-science-progress/594946/
    Gwern Branwen, “Catnip immunity and alternatives”: https://www.gwern.net/Catnip#optimal-catnip-alternative-selection-solving-the-mdp

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    Radical Progress Without Scary AI

    Mar 10, 2024 (14:51)


    Radical Progress Without Scary AI: Technological progress, in medicine for example, provides an altruistic motivation for developing more powerful AIs. I suggest that AI may be unnecessary, or even irrelevant, for that. We may be able to get the benefits without the risks. https://betterwithout.ai/radical-progress-without-AI

    What kind of AI might accelerate technological progress?: “Narrow” AI systems, specialized for particular technical tasks, are probably feasible, useful, and safe. Let's build those instead of Scary superintelligence. https://betterwithout.ai/what-AI-for-progress

    AI Is Net Harmful, and, A Negative Public Image For AI

    Feb 18, 2024 (22:06)


    Recognize that AI is probably net harmful: Actually-existing and near-future AIs are net harmful—never mind their longer-term risks. We should shut them down, not pussyfoot around hoping they can somehow be made safe. https://betterwithout.ai/AI-is-harmful

    Create a negative public image for AI: Most funding for AI research comes from the advertising industry. Their primary motivation may be to create a positive corporate image, to offset their obvious harms. Creating bad publicity for AI would eliminate their incentive to fund it. https://betterwithout.ai/AI-is-public-relations

    Seth Lazar's "Legitimacy, Authority, and the Political Value of Explanations": https://arxiv.org/ftp/arxiv/papers/2208/2208.08628.pdf
    Kate Crawford's "Atlas Of AI": https://www.amazon.com/dp/B08WKQ1MTM/?tag=meaningness-20

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    Spurn Artificial Ideology

    Feb 4, 2024 (16:29)


    “Apocalypse now” identified the corrosive influence of new viral ideologies, created unintentionally by recommender systems, as a major AI risk. These may cause social collapse if not tackled head-on. You can resist. https://betterwithout.ai/spurn-artificial-ideology

    Announcement tweet for the Opening Awareness, Opening Rationality discussion group starting on February 1: https://twitter.com/openingBklyn/status/1751314312415567956
    Document with more details: https://docs.google.com/document/d/1YPaos3zTgdraF9VouWkHUouVHVsrbYBluUO3Kh--Ezs/edit
    Vaster Than Ideology (text): https://meaningness.com/vaster-than-ideology
    Vaster Than Ideology (Fluidity Audiobooks episode): https://fluidity.libsyn.com/vaster-than-ideology
    Coinbase Is A Mission Focused Company: https://www.coinbase.com/blog/coinbase-is-a-mission-focused-company

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    Fight DOOM AI with SCIENCE! and ENGINEERING!!

    Jan 14, 2024 (12:26)


    Current AI practices produce technologies that are expensive, difficult to apply in real-world situations, and inherently unsafe. Neglected scientific and engineering investigations can bring better understanding of specific risks of current AI technology, and can lead to safer technologies. https://betterwithout.ai/fight-unsafe-AI

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Music is by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    Mistrust Machine Learning

    Dec 31, 2023 (16:20)


    The technologies underlying current AI systems are inherently, unfixably unreliable. They should be deprecated, avoided, regulated, and replaced. https://betterwithout.ai/mistrust-machine-learning

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Music is by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    Develop And Mandate Intrinsic Cybersecurity

    Dec 17, 2023 (13:04)


    Gaining unauthorized access to computer systems is a key source of power in many AI doom scenarios. That is easy now, because there are scant incentives for serious cybersecurity; so nearly all systems are radically insecure. Technical and political initiatives must mitigate this problem. https://betterwithout.ai/cybersecurity-vs-AI

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    Practical Actions You Can Take Against AI Risks, and, End Digital Surveillance

    Dec 10, 2023 (17:02)


    Practical Actions You Can Take Against AI Risks: We can and should protect against current and likely future harmful AI effects. This chapter recommends practical, near-term risk reduction measures. I suggest actions for the general public, computer professionals, AI ethics and safety organizations, funders, and governments. https://betterwithout.ai/pragmatic-AI-safety

    End Digital Surveillance: Databases of personal information collected via internet surveillance are a main resource for harmful AI. Eliminating them will alleviate multiple major risks. Technical and political approaches are both feasible. https://betterwithout.ai/end-digital-surveillance

    José Luis Ricón's “Set Sail For Fail? On AI risk”: https://nintil.com/ai-safety
    FTC Sues Kochava for Selling Data that Tracks People at Reproductive Health Clinics, Places of Worship, and Other Sensitive Locations: https://www.ftc.gov/news-events/news/press-releases/2022/08/ftc-sues-kochava-selling-data-tracks-people-reproductive-health-clinics-places-worship-other
    Consumer Reports' “Security Planner”: https://securityplanner.consumerreports.org/
    Wirecutter's “Every Step to Simple Online Security”: https://www.nytimes.com/wirecutter/guides/simple-online-security/
    Narwhal Academy's Zebra Crossing: https://github.com/narwhalacademy/zebra-crossing
    Privacy Guides: https://www.privacyguides.org/
    Installing a blocker is explicitly recommended by the FBI as a way to protect against cybercriminals: https://www.ic3.gov/Media/Y2022/PSA221221
    The Electronic Frontier Foundation's page of actions you can take: https://act.eff.org/
    The European Digital Rights organization (EDRi) page of simple ways you can influence EU privacy legislation: https://edri.org/take-action/our-campaigns/

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    Social Collapse: Apocalyptic Incoherence

    Dec 3, 2023 (28:51)


    This concludes the "Apocalypse Now" section of Better Without AI.   AI systems may cause near-term disasters through their proven ability to shatter societies and cultures. These might potentially cause human extinction, but are more likely to scale up to the level of the twentieth century dictatorships, genocides, and world wars. It would be wise to anticipate possible harms in as much detail as possible.   https://betterwithout.ai/incoherent-AI-apocalypses   You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks   If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold   Original music by Kevin MacLeod.   This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    Who Is In Control Of AI? What An AI Apocalypse May Look Like

    Nov 12, 2023 (16:53)


    Who is in control of AI? - It may already be too late to shut down the existing AI systems that could destroy civilization. https://betterwithout.ai/AI-is-out-of-control

    What an AI apocalypse may look like - Scenarios in which artificial intelligence systems degrade critical institutions to the point of collapse seem to me not just likely, but well under way. https://betterwithout.ai/AI-safety-failure

    This episode mentions the short story "Sort By Controversial" by Scott Alexander. Here is the audio version narrated by me: https://unsong.libsyn.com/sort-by-controversial

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    Apocalypse Now - At War With The Machines

    Nov 5, 2023 (36:50)


    "In this audiobook... A LARGE BOLD FONT IN ALL CAPITAL LETTERS SOUNDS LIKE THIS."   Apocalypse now - Current AI systems are already harmful. They pose apocalyptic risks even without further technology development. This chapter explains why; explores a possible path for near-term human extinction via AI; and sketches several disaster scenarios.   https://betterwithout.ai/apocalypse-now   At war with the machines - The AI apocalypse is now.   https://betterwithout.ai/AI-already-at-war   This interview with Stuart Russell is a good starting point for the a literature on recommender alignment, analogous to AI alignment:   https://www.youtube.com/watch?v=vzDm9IMyTp8   You can support the podcast and get episodes a week early, by supporting the Patreon:   https://www.patreon.com/m/fluidityaudiobooks   If you like the show, consider buying me a coffee:   https://www.buymeacoffee.com/mattarnold   Original music by Kevin MacLeod.   This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    Fear Power, Not Intelligence

    Oct 22, 2023 (18:03)


    Superintelligence should scare us only insofar as it grants superpowers. Protecting against specific harms of specific plausible powers may be our best strategy for preventing catastrophes. https://betterwithout.ai/fear-AI-power

    For much of the AI safety community, the central question has been “when will it happen?!” That is futile: we don't have a coherent description of what “it” is, much less how “it” would come about. Fortunately, a prediction wouldn't be useful anyway. An AI apocalypse is possible, so we should try to avert it. https://betterwithout.ai/scary-AI-when

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    Artificial General Intelligence And Transformative AI

    Oct 15, 2023 (9:59)


    Many people call the future threat “artificial general intelligence,” but all three words there are misleading when trying to understand risks. https://betterwithout.ai/artificial-general-intelligence

    AI may radically accelerate technology development. That might be extremely good or extremely bad. There are currently no good explanations for how either would happen, so it's hard to predict which, or when, or whether. The understanding necessary to guide the future to a good outcome may depend more on uncovering causes of technological progress than on reasoning about AI. https://betterwithout.ai/transformative-AI

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    Motivation, Morals, and Monsters

    Oct 8, 2023 (16:38)


    Thanks for your patience while I ran Fluidity Forum. We now resume "Better Without AI" by David Chapman.

    Speculations about autonomous AI assume simplistic theories of motivation. They also mistakenly confuse those with ethical theories. Building AI systems on these ideas would produce monsters. https://betterwithout.ai/AI-motivation

    Coherent Extrapolated Volition: https://betterwithout.ai/AI-motivation#fn_Turchin:~:text=%E2%80%9C-,Coherent%20Extrapolated%20Volition,-%E2%80%9D%20at%20LessWrong%2C%20undated
    A.I. Alignment Problem: "Human Values" Don't Actually Exist: https://www.lesswrong.com/posts/ngqvnWGsvTEiTASih/ai-alignment-problem-human-values-don-t-actually-exist
    “Can we survive technology?” by John von Neumann: http://geosci.uchicago.edu/~kite/doc/von_Neumann_1955.pdf

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    Diverse Forms Of Agency

    Sep 18, 2023 (14:03)


    It's a mistake to think that human-like agency is the only dangerous kind. That risks overlooking AIs causing agent-like harms in inhuman ways. https://betterwithout.ai/diverse-agency#fn_meme_critics

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    Autonomous AI Agents

    Sep 10, 2023 (9:29)


    Most apocalyptic scenarios involve an AI acting as an autonomous agent, pursuing goals that conflict with human ones. Many people reject AI risk, saying that machines can't have real goals or intentions. However, agency seems nebulous; and subtracting “real” agency from the scenario doesn't seem to remove the risk. https://betterwithout.ai/agency

    A video in which white blood cells look as if they have agency: https://www.youtube.com/watch?v=3KrCmBNiJRI
    The US National Security Commission on Artificial Intelligence's 2021 Report, which recommends spending $32bn per year on AI research to dramatically increase weapon systems agency: https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    Mind-Like AI

    Sep 4, 2023 (9:41)


    We have a powerful intuition that some special mental feature, such as self-awareness, is a prerequisite to intelligence. This causes confusion because we don't have a coherent understanding of what the special feature is, nor what role it plays in intelligent action. It may be best to treat mental characteristics as in the eye of the beholder, and therefore mainly irrelevant to AI risks. https://betterwithout.ai/mind-like-AI

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    Scary AI, and, Superintelligence

    Aug 27, 2023 (16:24)


    Scary AI: Apocalyptic AI scenarios usually involve some qualitatively different future form of artificial intelligence. No one can explain clearly what would make that exceptionally dangerous in a way current AI isn't. This confusion draws attention away from risks of existing and near-future technologies, and from ways of forestalling them. https://betterwithout.ai/scary-AI

    Superintelligence: Maybe AI will kill you before you finish reading this section. The extreme scenarios typically considered by the AI safety movement are possible in principle, but unfortunately no one has any idea how to prevent them. This book discusses moderate catastrophes instead, offering pragmatic approaches to avoiding or diminishing them. https://betterwithout.ai/superintelligence

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    Only You Can Stop An AI Apocalypse

    Aug 20, 2023 (16:01)


    We now begin narrating the book Better Without AI, by David Chapman. https://betterwithout.ai/only-you-can-stop-an-AI-apocalypse

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    A Fully Metarational Workplace

    May 22, 2023 (17:46)


    A meta-rational organization may appear chaotic (although productive and innovative), until you notice how smoothly routine rational work gets done. https://metarationality.com/meta-rational-workplace

    This is one of several standalone essays by David Chapman I'm incorporating into the unwritten sections of In The Cells Of The Eggplant, for the audiobook version.

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    Doing Being Rational

    May 14, 2023 (31:33)


    Fine-grained analysis of a molecular biology how-to video reveals significant features of rationality in practice. https://metarationality.com/rational-pcr

    This is one of several standalone essays by David Chapman I'm incorporating into the unwritten sections of In The Cells Of The Eggplant, for the audiobook version.

    The Britannica entry on PCR: https://www.britannica.com/science/polymerase-chain-reaction
    The Khan Academy explainer on PCR: https://www.youtube.com/watch?v=nHi-3jP6Mvc&ab_channel=KhanAcademy

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    Upgrade Your Cargo Cult For The Win

    May 7, 2023 (58:02)


    Part V of In The Cells Of The Eggplant: Taking Rational Work Seriously. Putting meta-rationality to work, in statistics, experimental science, software development, and entrepreneurship. https://metarationality.com/applications

    Richard Feynman derided “cargo cult science” that sticks to fixed systems. Innovation requires an upgrade to fluid, meta-systematic inquiry. https://metarationality.com/upgrade-your-cargo-cult

    This is one of several standalone essays by David Chapman I'm incorporating into the unwritten sections of In The Cells Of The Eggplant, for the audiobook version.

    The full text of Richard Feynman's address about cargo cult science: http://calteches.library.caltech.edu/51/2/CargoCult.htm
    David Chapman's description of how most people can't draw a bicycle: https://meaningness.com/understanding#bicycles
    An interview with Lucy Suchman in which she mentions David Chapman and Phil Agre's work, among many other things: https://web.archive.org/web/20200608155536/http://www.iwp.jku.at/born/mpwfst/02/www.dialogonleadership.org/Suchmanx1999.html
    "Scenius, Or Communal Genius" by Kevin Kelly: https://kk.org/thetechnium/scenius-or-comm/

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    What They Don't Teach You At STEM School

    Apr 23, 2023 (49:23)


    The syllabus for a curriculum teaching meta-rational skills: how to evaluate, combine, modify, discover, and create effective systems. https://metarationality.com/meta-rationality-curriculum

    This is one of several standalone essays by David Chapman I'm incorporating into the unwritten sections of In The Cells Of The Eggplant, for the audiobook version.

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    Ignorant, Irrelevant, And Inscrutable

    Apr 16, 2023 (25:44)


    Distinguishing irrational, anti-rational, and meta-rational critiques of rationalism helps reply effectively. https://metarationality.com/rationalism-critiques

    This is one of several standalone essays by David Chapman I'm incorporating into the unwritten sections of In The Cells Of The Eggplant, for the audiobook version.

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    Judging Whether A System Applies

    Apr 9, 2023 (52:14)


    Just as in the last episode, this is one of several standalone essays by David Chapman I'm incorporating into the unwritten sections of In The Cells Of The Eggplant, for the audiobook version. It fits well into Part 4: Taking Meta-Rationality Seriously.

    Rationality requires judging whether a system of reasoning applies to a situation — but that judgement cannot be systematic! https://metarationality.com/meta-systematic-judgement

    Links mentioned in this episode:
    A webcomic by Saturday Morning Breakfast Cereal about probability: https://www.smbc-comics.com/index.php?id=4127
    David Chapman's essay, "Nutrition offers its resignation. And the reply": https://metarationality.com/nutrition-resigns
    "July 4th And The Extraordinary Providential Deaths Of John Adams, Thomas Jefferson and James Monroe", a textbook example of religious eternalism, political eternalism, and rationalist eternalism, all rolled into one: https://web.archive.org/web/20160314212725/http://www.apollospeaks.com/?p=4354

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    Part 4: Taking Meta-Rationality Seriously

    Apr 2, 2023 (45:00)


    The heart of the meta-rationality book: what meta-rationality is, why it matters, and how to do it. https://metarationality.com/meta-rationality

    A first lesson in meta-rationality, or stage 5 cognition, using Bongard problems as a laboratory. This is an essay from metarationality.com edited and inserted into In The Cells Of The Eggplant, with the permission of the author. https://metarationality.com/bongard-meta-rationality

    Some links in the episode:
    Index of Bongard problems: http://www.foundalis.com/res/bps/bpidx.htm
    Brian Cantwell Smith: The philosophy of computation - meaning, mechanism, mystery: https://www.youtube.com/watch?v=USF1H70bRl0&ab_channel=Andr%C3%A9SouzaLemos
    Alexandre Linhares' “A glimpse at the metaphysics of Bongard problems” (through the Internet Archive): https://web.archive.org/web/20220327091633/http://app.ebape.fgv.br/comum/arq/Linhares2.pdf

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    Ontological Remodeling

    Mar 26, 2023 (56:01)


    Reconfiguring categories, properties, and relationships is a meta-rational skill—key in scientific revolutions. https://metarationality.com/remodeling

    Be advised, this episode is an hour long.

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    The Parable Of The Pebbles

    Mar 12, 2023 (15:39)


    Even counting, the simplest rational method, works only with the aid of non-rational support. https://metarationality.com/pebbles

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    Taking Rationality Seriously

    Mar 5, 2023 (49:08)


    A pragmatic understanding of how systematic rationality works in practice can help you level up your technical work. https://metarationality.com/rationality

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    Reasonable Ontology

    Feb 19, 2023 (27:43)


    Reasonableness works with nebulous, tacit, interactive, accountable, purposeful ontologies, which enable everyday routine activity. https://metarationality.com/reasonable-ontology

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    Instructed Activity

    Feb 12, 2023 (14:50)


    Using instructions requires figuring out what they mean in the context of your activity, and relative to your purposes. https://metarationality.com/reasonable-ontology

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    Reasonable Believings

    Feb 5, 2023 (66:47)


    This episode is more than an hour long.

    The epistemological categories—truth, belief, inference—are richer, more complex, diverse, and nebulous than rationalism supposes. https://metarationality.com/reasonable-epistemology

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    The Purpose Of Meaning

    Jan 22, 2023 (21:40)


    Peculiar features of language make sense as tools to enable collaboration, rather than to express objective truths. https://metarationality.com/purpose-of-meaning

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    Meaningful Perception

    Jan 15, 2023 (25:04)


    We actively work to perceive aspects of the world as meaningful, in terms of our purposes, in context. https://metarationality.com/perception

    This episode mentions a perception test of tracking basketball players passing a ball: https://www.youtube.com/watch?v=vJG698U2Mvo
    Also mentioned is a more advanced version of the perception test: https://www.youtube.com/watch?v=IGQmdoK_ZfY

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    Accountability And Routine

    Jan 8, 2023 (29:20)


    You are accountable for reasonableness: Accountability is the key concept in understanding mere reasonableness, as contrasted with systematic rationality. https://metarationality.com/accountability

    Reasonableness is routine: Routine activity usually goes smoothly overall, despite frequent minor glitches, because we have methods for repairing trouble. https://metarationality.com/routine

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    Aspects Of Reasonableness - and - Reasonableness Is Meaningful Activity

    Jan 1, 2023 (19:08)


    Aspects of reasonableness: A summary explanation of everyday reasonable activity, with a tabular guide and a concrete example. https://metarationality.com/reasonableness-aspects

    Reasonableness is meaningful activity: Understanding concrete, purposeful activity is a prerequisite to understanding the formal rationality that depends on it. https://metarationality.com/meaningful-activity

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

    Part Two: Taking Reasonableness Seriously

    Dec 25, 2022 (55:06)


    Everyday reasonableness is the foundation of technical, formal, and systematic rationality. https://metarationality.com/reasonableness

    This is not cognitive science - The Eggplant is neither cognitive nor science, although it seeks a better understanding of some phenomena cognitive science has studied. https://metarationality.com/cognitive-science

    The ethnomethodological flip - A dramatic perspective shift: understanding rationality as dependent on mere reasonableness to connect it with reality. https://metarationality.com/ethnomethodological-flip

    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.
