Podcasts about mechanistic

  • 141 podcasts
  • 270 episodes
  • 40m average duration
  • 1 new episode per month
  • Latest: Apr 15, 2025
POPULARITY (2017–2024)


Best podcasts about mechanistic

Show all podcasts related to mechanistic

Latest podcast episodes about mechanistic

Rehab and Performance Lab: A MedBridge Podcast
Rehab and Performance Lab Episode 14: What is Evidence-Based in Cupping and Fascial Science?

Apr 15, 2025 · 49:30


Christopher DaPrato, PT, DPT, SCS, joins host Phil Plisky to explore the evidence behind cupping and its role in rehab and performance. They break down the latest research on fascial mechanics, the benefits of movement-based cupping, and practical strategies for clinical application. Tune in to challenge common misconceptions and learn how to integrate cupping effectively into patient care.

Learning Objectives
- Analyze the evidence around cupping and its use in rehab and performance settings
- Apply evidence-based, practical strategies to actionably address mobility deficits, stability deficits, or motor control deficits
- Solve patient case scenarios involving mobility deficits with loading strategies, postural awareness during movement education, and muscle synergies with overuse and dominant-muscle hyperexcitability

Timestamps
(00:00:00) Welcome
(00:00:49) Introduction to cupping and myofascial decompression
(00:03:30) The importance of active modality in cupping
(00:07:48) Research landscape: evidence and methodology in cupping
(00:10:50) Challenges in cupping research and study design
(00:15:53) Mechanistic studies and depth of cupping effects
(00:20:05) Future directions and clinical implications of cupping
(00:24:43) The power of manual therapy
(00:28:45) Clinical reasoning in cupping therapy
(00:36:02) Understanding the neurophysiological effects
(00:37:28) Case studies in cupping application
(00:42:15) Cupping for recovery: myths and realities
(00:44:12) Key takeaways for practitioners

Rehab and Performance Lab is brought to you by Medbridge. If you'd like to earn continuing education credit for listening to this episode and access bonus takeaway handouts, log in to your Medbridge account and navigate to the course, where you'll find accreditation details. If applicable, complete the post-course assessment and survey to be eligible for credit.
The takeaway handout on Medbridge gives you the key points mentioned in this episode, along with additional resources you can implement into your practice right away. To hear more episodes of Rehab and Performance Lab, visit https://www.medbridge.com/rehab-and-performance-lab. If you'd like to subscribe to Medbridge, visit https://www.medbridge.com/pricing/

Ten Minute Bible Talks Devotional Bible Study
The Mistakes of a Mechanistic Faith | Historical Books | 1 Samuel 4:1-11

Apr 3, 2025 · 9:12


Are there sacred objects? Do you have a mechanistic faith? Do you treat God like a vending machine? In today's episode, Jensen shares how 1 Samuel 4:1-11 encourages us to fear God and enjoy his presence. If you're listening on Spotify, comment below one takeaway from today's episode! Read the Bible with us in 2025! This year, we're exploring the Historical Books—Joshua, Judges, 1 & 2 Samuel, and 1 & 2 Kings. Download your reading plan now. Your support makes TMBT possible. Ten Minute Bible Talks is a crowd-funded project. Join the TMBTeam to reach more people with the Bible. Give now. Like this content? Make sure to leave us a rating and share it so that others can find it, too. Use #asktmbt to connect with us, ask questions, and suggest topics. We'd love to hear from you! To learn more, visit our website and follow us on Instagram, Facebook, and Twitter @TenMinuteBibleTalks. Don't forget to subscribe to the TMBT Newsletter here. Passages: 1 Samuel 4:1-11

Under the Influence with Martin Harvey
The Great Health Divide: AI, Vitalism & the Future of Chiropractic with Dr Nimrod Mueller

Mar 26, 2025 · 51:59


In this episode, Martin and Nimrod dive into the cultural undercurrents shaping the future of health, and what chiropractors need to get before it's too late. Wearables are rising. AI is thinking faster than you can blink. Stress is peaking across the Western world. Amidst it all, chiropractic stands at a crossroads.

⚡ Go left → Mechanistic, transactional, data-driven "healthcare."
⚡ Go right → Vitalistic, human-centered, performance-driven care.

What side are you on? And what happens if you don't choose?

They cover:
- The coming stratification of society
- AI, nanotech, and the illusion of quick fixes
- Why human touch still matters (and always will)
- The "ontological shock" shaking people's sense of meaning
- How chiropractors can stay relevant without selling out

This is part philosophy, part strategy, and all signal, no noise. If you're a chiropractor wondering what the future holds, press play. It's not just about where healthcare is going. It's about where you are headed.

Learn more about Daily Visit Communication 2.0: https://insideoutpractices.thinkific.com/courses/daily-visit
Check out the Retention Recipe: https://insideoutpractices.thinkific.com/courses/retention-recipe-2-0
Check out Certainty 2.0: https://insideoutpractices.thinkific.com/courses/certainty-2-0
Email me: martin@insideoutpractices.com

The Adaptive Zone
Do Insoles Help With Running Injuries? | With Francis Del Duchetto, Podiatrist

Feb 5, 2025 · 33:31


Connect with Francis:
Research Gate: Francis Del Duchetto
Review Paper: Can Foot Orthoses Benefit Symptomatic Runners? Mechanistic and Clinical Insights Through a Scoping Review

Summary
In this episode of the Adaptive Zone podcast, Matthew Boyd interviews Francis Del Duchetto, a podiatrist, about the role of foot orthoses in treating running injuries. They discuss the different types of orthoses, the importance of research in understanding their effects, and how they can help redistribute loads in injured runners. The episode also covers the clinical applications of orthoses, their preventative use, and the differences between custom and prefabricated options. Francis shares insights on the long-term use of orthoses and the importance of a comprehensive treatment approach for runners.

Takeaways
- Foot orthoses are medical devices used to treat lower limb injuries.
- There are different types of foot orthoses: simple, prefabricated, and custom-made.
- Research on foot orthoses for injured runners is limited but growing.
- Foot orthoses can help redistribute loads from injured to non-injured structures.
- They are most effective when used as part of a multimodal treatment approach.
- Custom orthoses may not always be necessary; prefabricated options can be effective.
- Preventative use of orthoses is debated; other factors may be more important.
- Long-term use of orthoses should be monitored and adjusted as needed.
- The choice of running shoes can affect the effectiveness of orthoses.
- Research and clinical practice should continue to evolve in this area.

Chapters
00:00 Introduction to Foot Orthoses
02:45 Understanding Foot Orthoses and Their Types
05:45 The Importance of Research on Foot Orthoses
09:12 Mechanisms of Action: How Orthoses Help Runners
11:48 Clinical Applications: Treating Common Running Injuries
15:00 Preventative Use of Orthoses: A Discussion
18:11 Long-term Use and Adaptation to Orthoses
21:11 Custom vs. Prefabricated Orthoses
24:00 Future Directions in Research and Practice

Connect with Us:
If you're an injured runner, we can help you get back to running pain-free. Book a free call with us: https://matthewboydphysio.com/booking/
Running Fundamentals Course: https://matthewboydphysio.com/running-fundamentals-course/
Instagram: https://www.instagram.com/matthewboydphysio/

The Adaptive Zone
Do Runners Need Insoles / Orthotics?

Dec 25, 2024 · 17:22


Duchetto (2024): Can Foot Orthoses Benefit Symptomatic Runners? Mechanistic and Clinical Insights Through a Scoping Review

If you're an injured runner, we can help you get back to running pain-free. Click the link to book a free call with us: https://matthewboydphysio.com/booking/
Running Fundamentals Course: https://matthewboydphysio.com/running-fundamentals-course/
Instagram: https://www.instagram.com/matthewboydphysio/

Summary
This episode explores the role of insoles and orthotics in running, discussing their purpose, historical context, and current research findings. Matthew Boyd delves into how orthotics are used to improve alignment and prevent injuries, the shift in professional attitudes toward their use, and the latest evidence regarding their effectiveness for various running-related injuries. He emphasizes that while orthotics can be beneficial for specific conditions, their prophylactic use is not supported by current research.

Takeaways
- The terms "insoles" and "orthotics" are used interchangeably in running.
- Historically, orthotics were prescribed prophylactically for flat feet.
- Current research questions the effectiveness of prophylactic orthotic use.
- Orthotics can help reduce knee and shin pain in runners.
- Custom orthotics may not be significantly more effective than off-the-shelf options.
- Orthotics should be part of a comprehensive rehabilitation strategy.
- Runners can wean off orthotics if they no longer need them.
- The effectiveness of orthotics varies by individual and condition.
- Orthotics are not a silver bullet for injury prevention.
- Health professionals' attitudes toward orthotics have evolved over time.

Machine Learning Street Talk
Neel Nanda - Mechanistic Interpretability (Sparse Autoencoders)

Dec 7, 2024 · 222:36


Neel Nanda, a senior research scientist at Google DeepMind, leads their mechanistic interpretability team. In this extensive interview, he discusses his work trying to understand how neural networks function internally. At just 25 years old, Nanda has quickly become a prominent voice in AI research after completing his pure mathematics degree at Cambridge in 2020.

Nanda reckons that machine learning is unique because we create neural networks that can perform impressive tasks (like complex reasoning and software engineering) without understanding how they work internally. He compares this to having computer programs that can do things no human programmer knows how to write. His work focuses on "mechanistic interpretability": attempting to uncover and understand the internal structures and algorithms that emerge within these networks.

SPONSOR MESSAGES:
***
CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. https://centml.ai/pricing/
Tufa AI Labs is a brand-new research lab in Zurich started by Benjamin Crouzier, focused on ARC and AGI. They just acquired MindsAI, the current winners of the ARC challenge. Interested in working on ARC or getting involved in their events? Go to https://tufalabs.ai/
***

SHOWNOTES, TRANSCRIPT, ALL REFERENCES (DON'T MISS!): https://www.dropbox.com/scl/fi/36dvtfl3v3p56hbi30im7/NeelShow.pdf?rlkey=pq8t7lyv2z60knlifyy17jdtx&st=kiutudhc&dl=0

We riff on:
* How neural networks develop meaningful internal representations beyond simple pattern matching
* The effectiveness of chain-of-thought prompting and why it improves model performance
* The importance of hands-on coding over extensive paper reading for new researchers
* His journey from Cambridge to working with Chris Olah at Anthropic and eventually Google DeepMind
* The role of mechanistic interpretability in AI safety

NEEL NANDA:
https://www.neelnanda.io/
https://scholar.google.com/citations?user=GLnX3MkAAAAJ&hl=en
https://x.com/NeelNanda5

Interviewer - Tim Scarfe

TOC:
1. Part 1: Introduction
[00:00:00] 1.1 Introduction and Core Concepts Overview
2. Part 2: Outside Interview
[00:06:45] 2.1 Mechanistic Interpretability Foundations
3. Part 3: Main Interview
[00:32:52] 3.1 Mechanistic Interpretability
4. Neural Architecture and Circuits
[01:00:31] 4.1 Biological Evolution Parallels
[01:04:03] 4.2 Universal Circuit Patterns and Induction Heads
[01:11:07] 4.3 Entity Detection and Knowledge Boundaries
[01:14:26] 4.4 Mechanistic Interpretability and Activation Patching
5. Model Behavior Analysis
[01:30:00] 5.1 Golden Gate Claude Experiment and Feature Amplification
[01:33:27] 5.2 Model Personas and RLHF Behavior Modification
[01:36:28] 5.3 Steering Vectors and Linear Representations
[01:40:00] 5.4 Hallucinations and Model Uncertainty
6. Sparse Autoencoder Architecture
[01:44:54] 6.1 Architecture and Mathematical Foundations
[02:22:03] 6.2 Core Challenges and Solutions
[02:32:04] 6.3 Advanced Activation Functions and Top-k Implementations
[02:34:41] 6.4 Research Applications in Transformer Circuit Analysis
7. Feature Learning and Scaling
[02:48:02] 7.1 Autoencoder Feature Learning and Width Parameters
[03:02:46] 7.2 Scaling Laws and Training Stability
[03:11:00] 7.3 Feature Identification and Bias Correction
[03:19:52] 7.4 Training Dynamics Analysis Methods
8. Engineering Implementation
[03:23:48] 8.1 Scale and Infrastructure Requirements
[03:25:20] 8.2 Computational Requirements and Storage
[03:35:22] 8.3 Chain-of-Thought Reasoning Implementation
[03:37:15] 8.4 Latent Structure Inference in Language Models
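The sparse autoencoder architecture the episode revolves around can be sketched in a few lines. This is a hypothetical NumPy toy, not Nanda's or DeepMind's implementation: an overcomplete ReLU encoder/decoder pair whose loss trades reconstruction error against an L1 sparsity penalty. All dimensions and the `l1` coefficient are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sparse autoencoder: the dictionary is overcomplete
# (n_features > d_model), so each model activation is rebuilt
# from a small number of nonnegative "feature" activations.
d_model, n_features = 8, 32
W_enc = rng.normal(0, 0.1, (d_model, n_features))
b_enc = np.zeros(n_features)
W_dec = rng.normal(0, 0.1, (n_features, d_model))
b_dec = np.zeros(d_model)

def encode(x):
    # ReLU keeps feature activations nonnegative, encouraging sparsity.
    return np.maximum(0.0, x @ W_enc + b_enc)

def decode(f):
    # Linear reconstruction of the original activation vector.
    return f @ W_dec + b_dec

def loss(x, l1=1e-3):
    # Reconstruction error plus an L1 penalty on feature activations.
    f = encode(x)
    x_hat = decode(f)
    recon = ((x - x_hat) ** 2).sum(axis=1).mean()
    sparsity = np.abs(f).sum(axis=1).mean()
    return recon + l1 * sparsity

x = rng.normal(size=(16, d_model))  # stand-in for residual-stream activations
print("loss on random batch:", loss(x))
```

In real use, the batch `x` would be activations harvested from a language model, and the weights would be trained by gradient descent on this loss; the episode's later sections discuss top-k activation functions as an alternative to the plain L1 penalty.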

Neuropsychopharmacology Podcast
Biomarker development for menstrual cycle affective change: the need for greater temporal, mechanistic, and phenotypic specificity.

Dec 2, 2024 · 9:39


The menstrual cycle is known to affect things like mood and changes in pain. But there can also be symptoms that have a serious impact on a person's function, ability to work, and ability to maintain friendships and romantic relationships. This is a rare condition known as premenstrual dysphoric disorder (PMDD). But it's not the only psychiatric condition that can worsen with changes in the menstrual cycle: for instance, nearly 60 percent of menstruating patients with depression can experience cyclical worsening similar to PMDD. Conditions such as these are generally referred to as menstrual cycle affective change, which is more common in those with chronic psychiatric disorders. The authors are interested in reframing the conversation around menstrual cycle affective change as a more fundamental process that can be studied across disorders and diagnostic categories, identifying biomarkers that might help predict who will have those symptoms in more nuanced ways than categories allow. This paper shows how this dimensional way of thinking about menstrual cycle affective change lets us be precise about its temporal characteristics, its specific mechanisms, et cetera. Tory Eisenlohr-Moul is an associate professor of psychiatry at the University of Illinois at Chicago, and she's one of the authors. Jordan Barone is an MD/PhD candidate at the University of Illinois at Chicago, and she's another author. Hosted on Acast. See acast.com/privacy for more information.

JACC Podcast
Reaffirmation of Mechanistic Proteomic Signatures Accompanying SGLT2 Inhibition in Heart Failure: an EMPEROR Validation Cohort

Nov 4, 2024 · 11:30


In this episode, Dr. Valentin Fuster discusses groundbreaking research on SGLT2 inhibitors and their impact on heart failure, highlighting the validation of mechanistic proteomic signatures from a major clinical trial. The study reveals how empagliflozin influences over 2,000 proteins, promoting autophagy, enhancing mitochondrial health, and normalizing kidney function, offering new insights into therapeutic strategies for heart failure management.

Honest Talk About Heartbreak, Dating and Relationships
The Secret to Finding Purpose in Your Work (Through VALUES)

Nov 4, 2024 · 48:21 (transcription available)


Do you feel your work brings you purpose?

In all my work with individuals, I've seen three core needs. The first two dominate our early career:
1. To belong and feel part of a tribe.
2. To gain status and value within that tribe.

We seek out the field we want to be in, and how we can fulfil our ambitions in that field. But often many then feel a sense that something is missing: a sense of purpose. This relates to the third need, the desire to be part of something meaningful. And this is what Simon Gallagher coaches clients on.

In today's podcast we talked about his journey from PhD chemist to beer brewer to purpose coach.

Links: Simon Gallagher's LinkedIn

00:00 Finding Your Purpose
00:45 Understanding Core Values
01:20 A Journey Through Education
02:21 Discovering Coaching
03:50 The Power of LinkedIn
05:16 The Irish Influence
13:32 Transition to Coaching
19:25 The Coaching Landscape
24:45 Values and Purpose: The Foundation of Coaching
26:13 The Role of Humor in Personal Values
27:27 Navigating LinkedIn with Authenticity and Humor
30:59 The Importance of Visual Content on LinkedIn
32:37 Mechanistic vs. Aspirational Values
37:53 Coaching for Clarity and Performance
44:00 The Challenge of Freedom and Self-Validation
46:36 The Burden of Comparison and the Education System
47:20 Final Thoughts and Future Conversations

AJP-Heart and Circulatory Podcasts
Guidelines for Mechanistic Modeling and Analysis in Cardiovascular Research

Oct 29, 2024 · 30:02


In our latest episode, Dr. Jeff Saucerman (University of Virginia) interviews authors Dr. Naomi Chesler (University of California, Irvine) and Dr. Mitchel Colebank (University of South Carolina) about their new Guidelines in Cardiovascular Research article on incorporating mechanistic modeling into the analysis of experimental and clinical data to identify possible mechanisms of (ab)normal cardiovascular physiology. The authors' goal is to provide a consensus document that identifies best practices for in silico computational modeling in cardiovascular research. These guidelines provide the necessary methods for mechanistic model development, model analysis, and formal model calibration using fundamentals from statistics. Colebank et al. outline rigorous practices for computational, mechanistic modeling in cardiovascular research and discuss its synergistic value to experimental and clinical data. Would you like to understand how to apply a cone of uncertainty to your experimental data? Listen now to find out more.   Mitchel J. Colebank, Pim A. Oomen, Colleen M. Witzenburg, Anna Grosberg, Daniel A. Beard, Dirk Husmeier, Mette S. Olufsen, and Naomi C. Chesler Guidelines for mechanistic modeling and analysis in cardiovascular research Am J Physiol Heart Circ Physiol, published August 6, 2024. DOI: 10.1152/ajpheart.00253.2024

The Innovation Show
Stan Deetz - Leading Organizations through Transition: Communication and Cultural Change

Oct 18, 2024 · 52:37


Stan Deetz - Transforming Organizational Culture: Insights and Strategies for Modern Success

In this comprehensive episode, we explore pivotal topics in organizational culture and change management with experts like Stanley Deetz. From understanding the role of communication in periods of transition and mergers to building resilience and effective leadership, our discussions cover a wide range of issues critical to the modern workplace. We delve into the historical shifts in corporate culture, the influence of Japanese practices on American companies, and the evolving mindsets driven by generational changes and artificial intelligence. Learn about the power of systems thinking and organic metaphors in fostering innovation and teamwork. Discover essential strategies for managing change, overcoming fear, and leveraging diversity for organizational success. Join us to gain profound insights and practical tools for navigating and transforming organizational culture.

00:00 Introduction to Organizational Culture and Change
01:07 Origins and Development of the Book
02:24 Understanding Organizational Culture
02:50 Seton Hall and Online Education
04:59 Navigating Organizational Change
05:48 Managing Hearts, Minds, and Souls
10:47 The Role of Conflict in Innovation
18:10 Historical Shifts in Corporate Culture
26:15 Internal Models vs. External Realities
26:51 Generational Shifts in Organizational Metaphors
29:06 Cultural Fragmentation and Countercultures
31:00 Mechanistic vs. Organic Metaphors
32:33 Psychologizing Organizational Change
39:38 Systemic Thinking in Organizations
44:05 Challenges in Team Dynamics
46:43 Understanding Assumptions in Change Management
51:21 Conclusion and Contact Information

Find the episode we mentioned with George Lakoff at 32:25 here.

Keywords: Stan Deetz, Stanley Deetz, organizational culture, communication, Aidan McCullen, cultural change, leadership, organizational transitions, mergers, technological innovations, globalization, Seton Hall University, ethical issues, member involvement, executive master's program, organizational development, change processes, corporate culture, workplace dynamics

The Gradient Podcast
Jacob Andreas: Language, Grounding, and World Models

Oct 10, 2024 · 112:43


Episode 140. I spoke with Professor Jacob Andreas about:
* Language and the world
* World models
* How he's developed as a scientist

Enjoy!

Jacob is an associate professor at MIT in the Department of Electrical Engineering and Computer Science as well as the Computer Science and Artificial Intelligence Laboratory. His research aims to understand the computational foundations of language learning, and to build intelligent systems that can learn from human guidance. Jacob earned his Ph.D. from UC Berkeley, his M.Phil. from Cambridge (where he studied as a Churchill scholar) and his B.S. from Columbia. He has received a Sloan fellowship, an NSF CAREER award, MIT's Junior Bose and Kolokotrones teaching awards, and paper awards at ACL, ICML and NAACL.

Find me on Twitter for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
* (00:00) Intro
* (00:40) Jacob's relationship with grounding fundamentalism
* (05:21) Jacob's reaction to LLMs
* (11:24) Grounding language — is there a philosophical problem?
* (15:54) Grounding and language modeling
* (24:00) Analogies between humans and LMs
* (30:46) Grounding language with points and paths in continuous spaces
* (32:00) Neo-Davidsonian formal semantics
* (36:27) Evolving assumptions about structure prediction
* (40:14) Segmentation and event structure
* (42:33) How much do word embeddings encode about syntax?
* (43:10) Jacob's process for studying scientific questions
* (45:38) Experiments and hypotheses
* (53:01) Calibrating assumptions as a researcher
* (54:08) Flexibility in research
* (56:09) Measuring Compositionality in Representation Learning
* (56:50) Developing an independent research agenda and developing a lab culture
* (1:03:25) Language Models as Agent Models
* (1:04:30) Background
* (1:08:33) Toy experiments and interpretability research
* (1:13:30) Developing effective toy experiments
* (1:15:25) Language Models, World Models, and Human Model-Building
* (1:15:56) OthelloGPT's bag of heuristics and multiple "world models"
* (1:21:32) What is a world model?
* (1:23:45) The Big Question — from meaning to world models
* (1:28:21) From "meaning" to precise questions about LMs
* (1:32:01) Mechanistic interpretability and reading tea leaves
* (1:35:38) Language and the world
* (1:38:07) Towards better language models
* (1:43:45) Model editing
* (1:45:50) On academia's role in NLP research
* (1:49:13) On good science
* (1:52:36) Outro

Links:
* Jacob's homepage and Twitter
* Language Models, World Models, and Human Model-Building
* Papers:
  * Semantic Parsing as Machine Translation (2013)
  * Grounding language with points and paths in continuous spaces (2014)
  * How much do word embeddings encode about syntax? (2014)
  * Translating neuralese (2017)
  * Analogs of linguistic structure in deep representations (2017)
  * Learning with latent language (2018)
  * Learning from Language (2018)
  * Measuring Compositionality in Representation Learning (2019)
  * Experience grounds language (2020)
  * Language Models as Agent Models (2022)

Get full access to The Gradient at thegradientpub.substack.com/subscribe

ExplAInable
Mechanistic Interpretability with Mike Erlichson

Oct 8, 2024 · 41:58


In this episode, we dive into the complex topic of mechanistic interpretability in language models and try to simplify how one can understand what goes on inside these models. We explain how widely used models such as transformers and other mathematical structures attempt to decode the mechanisms of language, but we also discuss the difficulty of understanding what really happens "behind the scenes" of deep neural networks. We cover the different methods for interpreting a model's operation, focusing on mechanistic analysis of its layers and internal processes.

A blog worth knowing.

Demystifying Science
Transactional Quantum Mechanics - Dr. Ruth Kastner - DSPod #280

Sep 8, 2024 · 125:32


Dr. Ruth Kastner is a historian of physics and philosopher of science who is preoccupied with rational interpretations of quantum mechanics. She serves as the third pole of the transactional quantum mechanics big tent, where she, alongside John Cramer and Carver Mead, argues that the apparent mysteries of quantum mechanics can be rationalized by modeling everything from light to gravity as an exchange between atoms. Our conversation explores how she became enamored with this alternative approach to physics, the question of how accurate our models really are when it comes to the subatomic world, why the word "electron" is hopelessly confused, and why moving backwards in space and time aren't as different as they might seem at first glance.

Sign up for our Patreon and get episodes early + join our weekly Patron Chat: https://bit.ly/3lcAasB
Rock some Demystify Gear to spread the word: https://demystifysci.myspreadshop.com/
Or do your Amazon shopping through this link: https://amzn.to/4g2cPVV

(00:00:00) Go!
(00:05:21) Evolution of Transactional Interpretations in Quantum Mechanics
(00:07:12) Philosophical Motivations Behind Transactional Interpretations
(00:23:26) Mechanistic vs. holistic views of nature
(00:27:09) Quantum mechanics and mechanistic explanations
(00:32:24) Wave functions and physical reality
(00:46:14) Quantum Measurement and Macroscopic Objects
(00:49:00) Electron Energy Eigenstates and Stability
(00:55:44) Wave-Particle Duality and Transactional Interpretation
(00:59:51) Quantum Measurement Problem and Direct Action Theory
(01:00:46) Quantum Field Dynamics and Mutual Communication
(01:08:30) Future Influence on Present Actions
(01:11:03) Actualized vs. Possible Futures
(01:14:19) Quantum Indeterminacy and Hidden Variables
(01:21:56) Free Will and Determinism
(01:31:15) Life as entropy and disorder
(01:32:24) Consciousness and non-human life forms
(01:39:21) Complexity in living systems
(01:45:20) Challenges in quantum theory interpretations
(01:53:30) Ontology and Space-Time Concept
(01:55:07) Relativistic Quantum Direct Action Theory
(02:00:47) Charge Interpretation and Field Source Dynamics

#sciencepodcast #longformpodcast #QuantumMechanics #TransactionalInterpretation #PhilosophyOfScience #MechanisticVsHolistic #WaveFunctionReality #QuantumMeasurement #MacroscopicObjects #ElectronEnergy #WaveParticleDuality #DirectActionTheory #QuantumFieldDynamics #FutureInfluence #PossibleFutures #QuantumIndeterminacy #HiddenVariables #FreeWillVsDeterminism #LifeAndEntropy #ConsciousnessStudies #NonHumanLife #ComplexSystems #QuantumTheoryChallenges #Ontology #SpaceTime #RelativisticQuantum #ChargeDynamics #FieldSourceTheory #QuantumPhysics #QuantumPhilosophy #Determinism #QuantumFieldTheory #InterpretationOfQuantumTheory #WaveFunction

Check our short-films channel, @DemystifySci: https://www.youtube.com/c/DemystifyingScience
And our material science investigations of atomics, @MaterialAtomics: https://www.youtube.com/@MaterialAtomics
Join our mailing list: https://bit.ly/3v3kz2S

PODCAST INFO:
Anastasia completed her PhD studying bioelectricity at Columbia University. When not talking to brilliant people or making movies, she spends her time painting, reading, and guiding backcountry excursions. Shilo also did his PhD at Columbia studying the elastic properties of molecular water. When he's not in the film studio, he's exploring sound in music. They are both freelance professors at various universities.
- Blog: http://DemystifySci.com/blog - RSS: https://anchor.fm/s/2be66934/podcast/rss - Donate: https://bit.ly/3wkPqaD - Swag: https://bit.ly/2PXdC2y SOCIAL: - Discord: https://discord.gg/MJzKT8CQub - Facebook: https://www.facebook.com/groups/DemystifySci - Instagram: https://www.instagram.com/DemystifySci/ - Twitter: https://twitter.com/DemystifySci MUSIC: -Shilo Delay: https://g.co/kgs/oty671

LivDerm Digital Highlights Hub
Choosing the Right Path: JAK Inhibitors or Biologics for Atopic Dermatitis?

Sep 5, 2024 · 57:34


This is the fifth episode of our new "Program on JAK Inhibitors for Atopic Dermatitis" series, hosted by expert dermatologist Dr. Christopher Bunick, MD, PhD. This episode, "Choosing the Right Path: JAK Inhibitors or Biologics for Atopic Dermatitis?", featuring expert dermatologist Dr. Naiem Issa, MD, PhD, discusses the following topics:
- Mechanistic differences between JAKi and biologics for treatment of AD
- Clinical data supporting the safety of JAKi in head-to-head trials compared to biologics
- Pearls for prescribing JAKi to AD patients
- Strategies for choosing JAKi vs. biologics based on safety and patient factors

Brain Inspired
BI 192 Àlex Gómez-Marín: The Edges of Consciousness

Aug 28, 2024 · 90:34


Support the show to get full episodes and join the Discord community.

Àlex Gómez-Marín heads The Behavior of Organisms Laboratory at the Institute of Neuroscience in Alicante, Spain. He's one of those theoretical physicists turned neuroscientists, and he has studied a wide range of topics over his career. Most recently, he has become interested in what he calls the "edges of consciousness", which encompasses the many attempts to explain what may be happening when we have experiences outside our normal everyday experience: for example, when we are under the influence of hallucinogens, when we have near-death experiences (as Àlex has), paranormal experiences, and so on. So we discuss what led up to his interest in these edges of consciousness, how he now thinks about consciousness and doing science in general, and how important it is to make room for all possible explanations of phenomena and to leave our metaphysics open all the while.

Àlex's website: The Behavior of Organisms Laboratory. Twitter: @behaviOrganisms.
Previous episodes: BI 168 Frauke Sandig and Eric Black w/ Àlex Gómez-Marín: AWARE: Glimpses of Consciousness. BI 136 Michel Bitbol and Àlex Gómez-Marín: Phenomenology.
Related: The Consciousness of Neuroscience. Seeing the consciousness forest for the trees. The stairway to transhumanist heaven.

0:00 - Intro
4:13 - Evolving viewpoints
10:05 - Near-death experience
18:30 - Mechanistic neuroscience vs. the rest
22:46 - Are you doing science?
33:46 - Where is my mind?
44:55 - Productive vs. permissive brain
59:30 - Panpsychism
1:07:58 - Materialism
1:10:38 - How to choose what to do
1:16:54 - Fruit flies
1:19:52 - AI and the Singularity

Popular Mechanistic Interpretability: Goodfire Lights the Way to AI Safety

Play Episode Listen Later Aug 17, 2024 115:33


Nathan explores the cutting-edge field of mechanistic interpretability with Dan Balsam and Tom McGrath, co-founders of Goodfire. In this episode of The Cognitive Revolution, we delve into the science of understanding AI models' inner workings, recent breakthroughs, and the potential impact on AI safety and control. Join us for an insightful discussion on sparse autoencoders, polysemanticity, and the future of interpretable AI. Papers: Very accessible article on types of representations: Local vs Distributed Coding. Theoretical understanding of how models might pack concepts into their representations: Toy Models of Superposition. How structure in the world gives rise to structure in the latent space: The Geometry of Categorical and Hierarchical Concepts in Large Language Models. Using sparse autoencoders to pull apart language model representations: Sparse Autoencoders / Towards Monosemanticity / Scaling Monosemanticity. Finding & teaching concepts in superhuman systems: Acquisition of Chess Knowledge in AlphaZero / Bridging the Human-AI Knowledge Gap: Concept Discovery and Transfer in AlphaZero. Connecting microscopic learning to macroscopic phenomena: The Quantization Model of Neural Scaling. Understanding at scale: Language models can explain neurons in language models. Apply to join over 400 founders and execs in the Turpentine Network: https://hmplogxqz0y.typeform.com/to/JCkphVqj SPONSORS: Oracle Cloud Infrastructure (OCI) is a single platform for your infrastructure, database, application development, and AI needs. OCI has four to eight times the bandwidth of other clouds; offers one consistent price, and nobody does data better than Oracle. If you want to do more and spend less, take a free test drive of OCI at https://oracle.com/cognitive The Brave search API can be used to assemble a data set to train your AI models and help with retrieval augmentation at the time of inference. 
All while remaining affordable with developer first pricing, integrating the Brave search API into your workflow translates to more ethical data sourcing and more human representative data sets. Try the Brave search API for free for up to 2000 queries per month at https://bit.ly/BraveTCR Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off https://www.omneky.com/ Head to Squad to access global engineering without the headache and at a fraction of the cost: head to https://choosesquad.com/ and mention “Turpentine” to skip the waitlist. CHAPTERS: (00:00:00) About the Show (00:00:22) About the Episode (00:03:52) Introduction and Background (00:08:43) State of Interpretability Research (00:12:06) Key Insights in Interpretability (00:16:53) Polysemanticity and Model Compression (Part 1) (00:17:00) Sponsors: Oracle | Brave (00:19:04) Polysemanticity and Model Compression (Part 2) (00:22:50) Sparse Autoencoders Explained (00:27:19) Challenges in Interpretability Research (Part 1) (00:30:54) Sponsors: Omneky | Squad (00:32:41) Challenges in Interpretability Research (Part 2) (00:33:51) Goodfire's Vision and Mission (00:37:08) Interpretability and Scientific Models (00:43:48) Architecture and Interpretability Techniques (00:50:08) Quantization and Model Representation (00:54:07) Future of Interpretability Research (01:01:38) Skepticism and Challenges in Interpretability (01:07:51) Alternative Architectures and Universality (01:13:39) Goodfire's Business Model and Funding (01:18:47) Building the Team and Future Plans (01:31:03) Hiring and Getting Involved in Interpretability (01:51:28) Closing Remarks (01:51:38) Outro

The Ted O'Neill Program
08-09-2024 The Magic of the Mechanistic

The Ted O'Neill Program

Play Episode Listen Later Aug 9, 2024 13:01


Coach Ted talks about the artistry in mechanistic methods.

The Nonlinear Library
LW - Efficient Dictionary Learning with Switch Sparse Autoencoders by Anish Mudide

The Nonlinear Library

Play Episode Listen Later Jul 22, 2024 20:21


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Efficient Dictionary Learning with Switch Sparse Autoencoders, published by Anish Mudide on July 22, 2024 on LessWrong. Produced as part of the ML Alignment & Theory Scholars Program - Summer 2024 Cohort 0. Summary To recover all the relevant features from a superintelligent language model, we will likely need to scale sparse autoencoders (SAEs) to billions of features. Using current architectures, training extremely wide SAEs across multiple layers and sublayers at various sparsity levels is computationally intractable. Conditional computation has been used to scale transformers (Fedus et al.) to trillions of parameters while retaining computational efficiency. We introduce the Switch SAE, a novel architecture that leverages conditional computation to efficiently scale SAEs to many more features. 1. Introduction The internal computations of large language models are inscrutable to humans. We can observe the inputs and the outputs, as well as every intermediate step in between, and yet, we have little to no sense of what the model is actually doing. For example, is the model inserting security vulnerabilities or backdoors into the code that it writes? Is the model lying, deceiving or seeking power? Deploying a superintelligent model into the real world without being aware of when these dangerous capabilities may arise leaves humanity vulnerable. Mechanistic interpretability (Olah et al.) aims to open the black-box of neural networks and rigorously explain the underlying computations. Early attempts to identify the behavior of individual neurons were thwarted by polysemanticity, the phenomenon in which a single neuron is activated by several unrelated features (Olah et al.). 
Language models must pack an extremely vast amount of information (e.g., the entire internet) within a limited capacity, encouraging the model to rely on superposition to represent many more features than there are dimensions in the model state (Elhage et al.). Sharkey et al. and Cunningham et al. propose to disentangle superimposed model representations into monosemantic, cleanly interpretable features by training unsupervised sparse autoencoders (SAEs) on intermediate language model activations. Recent work (Templeton et al., Gao et al.) has focused on scaling sparse autoencoders to frontier language models such as Claude 3 Sonnet and GPT-4. Despite scaling SAEs to 34 million features, Templeton et al. estimate that they are likely orders of magnitude short of capturing all features. Furthermore, Gao et al. train SAEs on a series of language models and find that larger models require more features to achieve the same reconstruction error. Thus, to capture all relevant features of future large, superintelligent models, we will likely need to scale SAEs to several billions of features. With current methodologies, training SAEs with billions of features at various layers, sublayers and sparsity levels is computationally infeasible. Training a sparse autoencoder generally consists of six major computations: the encoder forward pass, the encoder gradient, the decoder forward pass, the decoder gradient, the latent gradient and the pre-bias gradient. Gao et al. introduce kernels and tricks that leverage the sparsity of the TopK activation function to dramatically optimize all computations excluding the encoder forward pass, which is not (yet) sparse. After implementing these optimizations, Gao et al. attribute the majority of the compute to the dense encoder forward pass and the majority of the memory to the latent pre-activations. 
No work has attempted to accelerate or improve the memory efficiency of the encoder forward pass, which remains the sole dense matrix multiplication. In a standard deep learning model, every parameter is used for every input. An alternative approach is conditional computatio...
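To make the computations in the post concrete, here is a minimal sketch of a standard TopK sparse autoencoder forward pass (an illustrative toy, not the Switch SAE architecture the post introduces; all dimensions and weights are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_dict, k = 64, 1024, 8          # illustrative sizes only
W_enc = rng.standard_normal((d_model, d_dict)) / np.sqrt(d_model)
W_dec = rng.standard_normal((d_dict, d_model)) / np.sqrt(d_dict)
b_pre = np.zeros(d_model)

def sae_forward(x):
    """One forward pass of a toy TopK sparse autoencoder."""
    # Encoder forward pass: the dense matmul the post identifies as the
    # dominant cost once the other computations are sparsified.
    pre_acts = (x - b_pre) @ W_enc
    # TopK activation: keep only the k largest pre-activations per input.
    latents = np.zeros_like(pre_acts)
    idx = np.argpartition(pre_acts, -k, axis=-1)[..., -k:]
    vals = np.maximum(np.take_along_axis(pre_acts, idx, axis=-1), 0)
    np.put_along_axis(latents, idx, vals, axis=-1)
    # Decoder forward pass: latents are sparse, so this can be optimized.
    recon = latents @ W_dec + b_pre
    return recon, latents

x = rng.standard_normal((4, d_model))      # fake residual-stream activations
recon, latents = sae_forward(x)
print((latents != 0).sum(axis=-1))         # at most k active features per row
```

The encoder matmul touches every one of the d_dict features for every input, which is why conditional computation over dictionary "experts" is an appealing way to scale further.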

The Nonlinear Library
AF - An Extremely Opinionated Annotated List of My Favourite Mechanistic Interpretability Papers v2 by Neel Nanda

The Nonlinear Library

Play Episode Listen Later Jul 7, 2024 38:20


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An Extremely Opinionated Annotated List of My Favourite Mechanistic Interpretability Papers v2, published by Neel Nanda on July 7, 2024 on The AI Alignment Forum. This post represents my personal hot takes, not the opinions of my team or employer. This is a massively updated version of a similar list I made two years ago. There's a lot of mechanistic interpretability papers, and more come out all the time. This can be pretty intimidating if you're new to the field! To try to help out, here's a reading list of my favourite mech interp papers: papers which I think are important to be aware of, often worth skimming, and sometimes worth reading deeply (time permitting). I've annotated these with my key takeaways, what I like about each paper, which bits to deeply engage with vs skim, etc. I wrote a similar post 2 years ago, but a lot has changed since then, thus v2! Note that this is not trying to be a comprehensive literature review - this is my answer to "if you have limited time and want to get up to speed on the field as fast as you can, what should you do". I'm deliberately not following academic norms like necessarily citing the first paper introducing something, or all papers doing some work, and am massively biased towards recent work that is more relevant to the cutting edge. I also shamelessly recommend a bunch of my own work here, sorry! How to read this post: I've bolded the most important papers to read, which I recommend prioritising. All of the papers are annotated with my interpretation and key takeaways, and tbh I think reading that may be comparably good to skimming the paper. And there's far too many papers to read all of them deeply unless you want to make that a significant priority. 
I recommend reading all my summaries, noting the papers and areas that excite you, and then trying to dive deeply into those. Foundational Work A Mathematical Framework for Transformer Circuits (Nelson Elhage et al, Anthropic) - absolute classic, foundational ideas for how to think about transformers (see my blog post for what to skip). See my youtube tutorial (I hear this is best watched after reading the paper, and adds additional clarity) Deeply engage with: All the ideas in the overview section, especially: Understanding the residual stream and why it's fundamental. The notion of interpreting paths between interpretable bits (eg input tokens and output logits) where the path is a composition of matrices and how this is different from interpreting every intermediate activations And understanding attention heads: what a QK and OV matrix is, how attention heads are independent and additive and how attention and OV are semi-independent. Skip Trigrams & Skip Trigram bugs, esp understanding why these are a really easy thing to do with attention, and how the bugs are inherent to attention heads separating where to attend to (QK) and what to do once you attend somewhere (OV) Induction heads, esp why this is K-Composition (and how that's different from Q & V composition), how the circuit works mechanistically, and why this is too hard to do in a 1L model Skim or skip: Eigenvalues or tensor products. They have the worst effort per unit insight of the paper and aren't very important. Superposition Superposition is a core principle/problem in model internals. For any given activation (eg the output of MLP13), we believe that there's a massive dictionary of concepts/features the model knows of. Each feature has a corresponding vector, and model activations are a sparse linear combination of these meaningful feature vectors. 
Further, there are more features in the dictionary than activation dimensions, and they are thus compressed and interfere with each other, essentially causing cascading errors. This phenomenon of compression is called superposition. Toy models of superpositio...
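The sparse-linear-combination picture described above can be illustrated numerically: pack many more unit feature directions than dimensions into a space, activate a few of them, and observe the interference when reading features back with dot products (a toy sketch with arbitrary sizes, not taken from the papers cited):

```python
import numpy as np

rng = np.random.default_rng(0)

d, n_features = 32, 256                 # many more features than dimensions
# Random unit feature vectors: in high dimensions they are *nearly*
# orthogonal, which is what makes superposition workable.
F = rng.standard_normal((n_features, d))
F /= np.linalg.norm(F, axis=1, keepdims=True)

# A sparse combination: only a handful of features are "on" at once.
active = rng.choice(n_features, size=5, replace=False)
coeffs = rng.uniform(1.0, 2.0, size=5)
activation = coeffs @ F[active]         # the superposed activation vector

# Reading features back: active features respond strongly, but every
# inactive feature also picks up nonzero interference - the "cascading
# errors" the summary mentions.
readout = F @ activation
print(sorted(active), np.argsort(-np.abs(readout))[:5])
```

With only 5 of 256 features active, the readout for the active set dominates the interference on the inactive set; make the combination denser and the interference swamps the signal.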

Under the Influence with Martin Harvey
Lessons from going from mechanistic to vitalistic practice with Dr Glen Duffy

Under the Influence with Martin Harvey

Play Episode Listen Later Jun 26, 2024 56:32


In the world of chiropractic, the philosophy is our compass—it's the 'why' that shapes the 'what' and 'how' of our practice. But what happens when your 'why' evolves? How do you transform your approach to stay true to your new understanding? Join us in this episode as Martin sits down with Glenn Duffy, a chiropractor who has navigated this very journey. Glenn transitioned from running a successful mechanistic practice in the UK to embracing a vitalistic approach in Spain. Along the way, he's authored books that illuminate the true value of chiropractic and holistic health. Tune in to explore Glenn's inspiring shift and discover insights that could revolutionize your own practice. Check out Glenn's book https://www.amazon.com.au/Befriend-Gravity-Commonsense-Approach-Rebuild-ebook/dp/B0BLHYVHQL/ref=tmm_kin_swatch_0?_encoding=UTF8&dib_tag=se&dib=eyJ2IjoiMSJ9.SsegccMj6tr2fz90QioSZ8pY2uCPlfpodwMwzbhI0Vim569As_aIfPnUv82z3w9yC2kqE9MvjXpxu9riYy5MFAa7c5J2N17X1nrCKzWWakIAVnFqbCDwNM3Gm-2VXGP4MHvYn75Us2NZ9zFcHyTLJw.PTfZ6_6PH59ApdHWEFMo60NXapi-rqEpmLe8pPpfXm8&qid=1718696930&sr=8-1 To learn more and register for the ACA Conference 27-28 July https://www.chiropractors.org.au/event/aca-conference-2024-cairns To learn more about the Retention Recipe 2.0 https://insideoutpractices.thinkific.com/courses/retention-recipe-2-0 Check out Certainty 2.0 https://insideoutpractices.thinkific.com/courses/certainty-2-0 To listen to my other podcast "Cut The Crap" on Spotify https://open.spotify.com/show/6CNsDhnm93RXHNCIudicoc?si=da7bda2c44794d36 To listen on Apple Podcasts: https://podcasts.apple.com/au/podcast/cut-the-crap-with-craig-and-martin/id1744483428 Email me - martin@insideoutpractices.com --- Send in a voice message: https://podcasters.spotify.com/pod/show/martin-harvey/message

The Nonlinear Library
AF - Compact Proofs of Model Performance via Mechanistic Interpretability by Lawrence Chan

The Nonlinear Library

Play Episode Listen Later Jun 24, 2024 12:47


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Compact Proofs of Model Performance via Mechanistic Interpretability, published by Lawrence Chan on June 24, 2024 on The AI Alignment Forum. We recently released a paper on using mechanistic interpretability to generate compact formal guarantees on model performance. In this companion blog post to our paper, we'll summarize the paper and flesh out some of the motivation and inspiration behind our work. Paper abstract In this work, we propose using mechanistic interpretability - techniques for reverse engineering model weights into human-interpretable algorithms - to derive and compactly prove formal guarantees on model performance. We prototype this approach by formally proving lower bounds on the accuracy of 151 small transformers trained on a Max-of-K task. We create 102 different computer-assisted proof strategies and assess their length and tightness of bound on each of our models. Using quantitative metrics, we find that shorter proofs seem to require and provide more mechanistic understanding. Moreover, we find that more faithful mechanistic understanding leads to tighter performance bounds. We confirm these connections by qualitatively examining a subset of our proofs. Finally, we identify compounding structureless noise as a key challenge for using mechanistic interpretability to generate compact proofs on model performance. Introduction One hope for interpretability is that as we get AGI, we'll be able to use increasingly capable automation to accelerate the pace at which we can interpret ever more powerful models. These automatically generated interpretations need to satisfy two criteria: 1. Compression: Explanations compress the particular behavior of interest. Not just so that it fits in our heads, but also so that it generalizes well and is feasible to find and check. 2. 
Correspondence (or faithfulness): Explanations must accurately reflect the actual model mechanisms we aim to explain, allowing us to confidently constrain our models for guarantees or other practical applications. Progress happens best when there are clear and unambiguous targets and quantitative metrics. For correspondence, the field has developed increasingly targeted metrics for measuring performance: ablations, patching, and causal scrubbing. In our paper, we use mathematical proof to ensure correspondence, and present proof length as the first quantitative measure of explanation compression that is theoretically grounded, less subject to human judgement, and avoids trivial Goodharting. We see our core contributions in the paper as: 1. We push informal mechanistic interpretability arguments all the way to proofs of generalization bounds on toy transformers trained on the Max-of-K task. This is a first step in getting formal guarantees about global properties of specific models, which is the approach of post-hoc mechanistic interpretability. 2. We introduce compactness of proof as a metric on explanation compression. We find that compactifying proofs requires deeper understanding of model behavior, and more compact proofs of the same bound necessarily encode more understanding of the model. 3. It is a common intuition that "proofs are hard for neural networks", and we flesh this intuition out as the problem of efficiently reasoning about structureless noise, which is an artifact of explanations being lossy approximations of the model's learned weights. 
While we believe that the proofs themselves (and in particular our proof which achieves a length that is linear in the number of model parameters for the parts of the model we understand adequately) may be of particular interest to those interested in guarantees, we believe that the insights about explanation compression from this methodology and our results are applicable more broadly to the field of mechanistic interpretability. Cor...

The Nonlinear Library
LW - Rational Animations' intro to mechanistic interpretability by Writer

The Nonlinear Library

Play Episode Listen Later Jun 15, 2024 16:06


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rational Animations' intro to mechanistic interpretability, published by Writer on June 15, 2024 on LessWrong. In our new video, we talk about research on interpreting InceptionV1, a convolutional neural network. Researchers have been able to understand the function of neurons and channels inside the network and uncover visual processing algorithms by looking at the weights. The work on InceptionV1 is early but landmark mechanistic interpretability research, and it functions well as an introduction to the field. We also go into the rationale and goals of the field and mention some more recent research near the end. Our main source material is the circuits thread in the Distill journal and this article on feature visualization. The author of the script is Arthur Frost. I have included the script below, although I recommend watching the video since the script has been written with accompanying moving visuals in mind. Intro In 2018, researchers trained an AI to find out if people were at risk of heart conditions based on pictures of their eyes, and somehow the AI also learned to tell people's biological sex with incredibly high accuracy. How? We're not entirely sure. The crazy thing about Deep Learning is that you can give an AI a set of inputs and outputs, and it will slowly work out for itself what the relationship between them is. We didn't teach AIs how to play chess, Go, and Atari games by showing them human experts - we taught them how to work it out for themselves. And the issue is, now they have worked it out for themselves, and we don't know what it is they worked out. Current state-of-the-art AIs are huge. Meta's largest LLaMA2 model uses 70 billion parameters spread across 80 layers, all doing different things. 
It's deep learning models like these which are being used for everything from hiring decisions to healthcare and criminal justice to what YouTube videos get recommended. Many experts believe that these models might even one day pose existential risks. So as these automated processes become more widespread and significant, it will really matter that we understand how these models make choices. The good news is, we've got a bit of experience uncovering the mysteries of the universe. We know that humans are made up of trillions of cells, and by investigating those individual cells we've made huge advances in medicine and genetics. And learning the properties of the atoms which make up objects has allowed us to develop modern material science and high-precision technology like computers. If you want to understand a complex system with billions of moving parts, sometimes you have to zoom in. That's exactly what Chris Olah and his team did starting in 2015. They focused on small groups of neurons inside image models, and they were able to find distinct parts responsible for detecting everything from curves and circles to dog heads and cars. In this video we'll briefly explain how (convolutional) neural networks work, visualise what individual neurons are doing, look at how neurons - the most basic building blocks of the neural network - combine into 'circuits' to perform tasks, and explore why interpreting networks is so hard. There will also be lots of pictures of dogs, like this one. Let's get going. We'll start with a brief explanation of how convolutional neural networks are built. Here's a network that's trained to label images. An input image comes in on the left, and it flows along through the layers until we get an output on the right - the model's attempt to classify the image into one of the categories. This particular model is called InceptionV1, and the images it's learned to classify are from a massive collection called ImageNet. 
ImageNet has 1000 different categories of image, like "sandal" and "saxophone" and "sarong" (which, if you don't know, is a k...
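As a toy illustration of the kind of low-level feature detector described above, a single hand-written convolutional filter can act as a vertical-edge detector (illustrative weights, not actual InceptionV1 parameters):

```python
import numpy as np

# A tiny grayscale "image": dark on the left half, bright on the right.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# A 3x3 vertical-edge filter: negative on the left column, positive on
# the right, so it fires on dark-to-bright transitions.
kernel = np.array([[-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0]])

# Valid "convolution" (cross-correlation, as in deep-learning conv layers).
H, W = image.shape[0] - 2, image.shape[1] - 2
response = np.zeros((H, W))
for i in range(H):
    for j in range(W):
        response[i, j] = (image[i:i+3, j:j+3] * kernel).sum()

print(response)  # the "neuron" fires only where the edge sits
```

In a real network these filter weights are learned, and circuits-style interpretability reads them back out to explain what each channel detects.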

Mixture of Experts
Episode 7: Apple's WWDC24 reactions and mechanistic interpretability

Mixture of Experts

Play Episode Listen Later Jun 14, 2024 39:41


In Episode 7 of Mixture of Experts, host Tim Hwang is joined by Shobhit Varshney, Skyler Speakman, and Kaoutar El Maghaoui. Today, the experts react to Apple's WWDC24 announcements. Is Apple late to the AI game? Then, part 2 on interpretability this week, as OpenAI released their study on mechanistic interpretability. The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.

Rheumnow Podcast
Mechanistic Promise in RA Doesn't Always Mean Actual Gain

Rheumnow Podcast

Play Episode Listen Later Jun 13, 2024 4:07


Dr. David Liew reports on abstracts OP0007 and OP0069 at Eular 2024 in Vienna, Austria.

The Bare Performance Podcast
068: Debunking Nutrition Myths, A Different Approach To Pain Management & Exposing Fitness Lies With Layne Norton

The Bare Performance Podcast

Play Episode Listen Later May 27, 2024 143:03


This week, I am excited to have Layne Norton with me on the podcast. Layne has been a huge inspiration and a source of knowledge since my health and fitness journey began. After years of study and research, he obtained a PhD in nutritional sciences, and his extensive knowledge is evident in our conversation. We'll be delving into topics ranging from the science and psychology of pain to navigating misinformation to the science behind our eating habits. You're bound to come away from this episode smarter than before. Save 10% at BPN Supps: https://bit.ly/nickbare10audio Follow for more:  IG: https://www.instagram.com/nickbarefitness/ YT: https://www.youtube.com/@nickbarefitness Keep up with Layne:IG: https://www.instagram.com/biolayne/ Topics: 0:00 Intro 0:47 Welcome 5:13 The science of pain 19:21 Injuries during training tapers 25:13 Consistency is an equalizer 36:34 Mechanistic studies 46:51 Do the research 54:51 Identifying who an expert is 59:06 Why we're addicted to negativity 1:04:41 Managing the misinformation 1:15:47 Types of testing and research 1:25:10 Intermittent fasting 1:39:37 Eating habits 1:32:21 Cell autophagy and fasting 1:36:40 Blood sugar levels 1:45:04 Eating frequency 1:48:48 Tracking serving sizes 1:54:34 Stepping over rocks to pick up pebbles 2:10:00 Deadlifting

The Open Source AI Question - Part 2 | Robert Wright & Nathan Labenz

Play Episode Listen Later May 15, 2024 48:19


Dive into an in-depth conversation with Nathan and Robert Wright as they discuss AI's transformative potential, mechanistic interpretability, and the sobering realities of AI alignment research. Learn about the defensive strategies and safety measures necessary for managing advanced AI risks in an open source world. Don't miss the insights on AI-powered VR, and be sure to check out part one on the non-zero feed. Check out Part 1 of the conversation here: https://www.youtube.com/watch?v=s8bgB8TCdBs SPONSORS: Oracle Cloud Infrastructure (OCI) is a single platform for your infrastructure, database, application development, and AI needs. OCI has four to eight times the bandwidth of other clouds; offers one consistent price, and nobody does data better than Oracle. If you want to do more and spend less, take a free test drive of OCI at https://oracle.com/cognitive The Brave search API can be used to assemble a data set to train your AI models and help with retrieval augmentation at the time of inference. All while remaining affordable with developer first pricing, integrating the Brave search API into your workflow translates to more ethical data sourcing and more human representative data sets. Try the Brave search API for free for up to 2000 queries per month at https://bit.ly/BraveTCR Head to Squad to access global engineering without the headache and at a fraction of the cost: head to https://choosesquad.com/ and mention "Turpentine" to skip the waitlist. Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. 
Mention "Cog Rev" for 10% off https://www.omneky.com/ CHAPTERS: (00:00:00) Introduction (00:07:13) AI in Governance (00:11:08) Sci-fi doomer (00:13:58) Sponsors: Oracle | Brave (00:16:05) The frontier models (00:20:22) Emergent behavior (00:23:48) Theory of mind (00:28:09) Mechanistic interpretability (00:34:12) Sponsors: Squad | Omneky (00:38:12) AI Alignment Techniques (00:42:38) The Sweet Spot of AI

The Nonlinear Library
AF - Mechanistic Interpretability Workshop Happening at ICML 2024! by Neel Nanda

The Nonlinear Library

Play Episode Listen Later May 3, 2024 1:30


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Mechanistic Interpretability Workshop Happening at ICML 2024!, published by Neel Nanda on May 3, 2024 on The AI Alignment Forum. Announcing the first academic Mechanistic Interpretability workshop, held at ICML 2024! We'd love to get papers submitted if any of you have relevant projects! Deadline May 29, max 4 or max 8 pages. We welcome anything that brings us closer to a principled understanding of model internals, even if it's not "traditional" mech interp. Check out our website for example topics! There's $1750 in best paper prizes. We also welcome less standard submissions, like open source software, models or datasets, negative results, distillations, or position pieces. And if anyone is attending ICML, you'd be very welcome at the workshop! We have a great speaker line-up: Chris Olah, Jacob Steinhardt, David Bau and Asma Ghandeharioun. And a panel discussion, hands-on tutorial, and social. I'm excited to meet more people into mech interp! And if you know anyone who might be interested in attending/submitting, please pass this on. Twitter thread, Website Thanks to my great co-organisers: Fazl Barez, Lawrence Chan, Kayo Yin, Mor Geva, Atticus Geiger and Max Tegmark Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
LW - A gentle introduction to mechanistic anomaly detection by Erik Jenner

The Nonlinear Library

Play Episode Listen Later Apr 4, 2024 16:27


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A gentle introduction to mechanistic anomaly detection, published by Erik Jenner on April 4, 2024 on LessWrong. TL;DR: Mechanistic anomaly detection aims to flag when an AI produces outputs for "unusual reasons." It is similar to mechanistic interpretability but doesn't demand human understanding. I give a self-contained introduction to mechanistic anomaly detection from a slightly different angle than the existing one by Paul Christiano (focused less on heuristic arguments and drawing a more explicit parallel to interpretability). Mechanistic anomaly detection was first introduced by the Alignment Research Center (ARC), and a lot of this post is based on their ideas. However, I am not affiliated with ARC; this post represents my perspective. Introduction We want to create useful AI systems that never do anything too bad. Mechanistic anomaly detection relaxes this goal in two big ways: Instead of eliminating all bad behavior from the start, we're just aiming to flag AI outputs online. Instead of specifically flagging bad outputs, we flag any outputs that the AI produced for "unusual reasons." These are serious simplifications. But strong methods for mechanistic anomaly detection (or MAD for short) might still be important progress toward the full goal or even achieve it entirely: Reliably flagging bad behavior would certainly be a meaningful step (and perhaps sufficient if we can use the detector as a training signal or are just fine with discarding some outputs). Not all the cases flagged as unusual by MAD will be bad, but the hope is that the converse holds: with the right notion of "unusual reasons," all bad cases might involve unusual reasons. Often we may be fine with flagging more cases than just the bad ones, as long as it's not excessive. 
I intentionally say "unusual reasons for an output" rather than "unusual inputs" or "unusual outputs." Good and bad outputs could look indistinguishable to us if they are sufficiently complex, and inputs might have similar problems. The focus on mechanistic anomalies (or "unusual reasons") distinguishes MAD from other out-of-distribution or anomaly detection problems. Because of this, I read the name as "[mechanistic anomaly] detection" - it's about detecting mechanistic anomalies rather than detecting any anomalies with mechanistic means. One intuition pump for mechanistic anomaly detection comes from mechanistic interpretability. If we understand an AI system sufficiently well, we should be able to detect, for example, when it thinks it's been deployed and executes a treacherous turn. The hope behind MAD is that human understanding isn't required and that we can detect cases like this as "mechanistically anomalous" without any reference to humans. This might make the problem much easier than if we demand human understanding. The Alignment Research Center (ARC) is trying to formalize "reasons" for an AI's output using heuristic arguments. If successful, this theoretical approach might provide an indefinitely scalable solution to MAD. Collaborators and I are working on a more empirical approach to MAD that is not centered on heuristic arguments, and this post gives a self-contained introduction that might be more suitable to that perspective (and perhaps helpful for readers with an interpretability background). Thanks to Viktor Rehnberg, Oliver Daniels-Koch, Jordan Taylor, Mark Xu, Alex Mallen, and Lawrence Chan for feedback on a draft! Mechanistic anomaly detection as an alternative to interpretability: a toy example As a toy example, let's start with the SmartVault setting from the ELK report. SmartVault is a vault housing a diamond that we want to protect from robbers. We would like an AI to use various actuators to keep the diamond safe by stopping any robbers. 
There is a camera pointed at the diamond, which we want to u...
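The excerpt does not spell out what the empirical approach to MAD looks like in code, so purely as an illustrative sketch (my assumption, not the post's actual method): one common baseline for flagging "unusual reasons" is to fit a Gaussian to a network's hidden activations on trusted data and score new inputs by Mahalanobis distance, treating far-out activations as mechanistically suspicious. All arrays below are synthetic stand-ins for real activations.

```python
import numpy as np

def fit_detector(trusted_activations):
    """Fit a Gaussian to activations collected on trusted inputs.

    trusted_activations: (n_samples, d) array of hidden activations.
    Returns the mean and a regularized inverse covariance.
    """
    mu = trusted_activations.mean(axis=0)
    centered = trusted_activations - mu
    cov = centered.T @ centered / len(trusted_activations)
    cov += 1e-3 * np.eye(cov.shape[0])  # regularize so the inverse exists
    return mu, np.linalg.inv(cov)

def anomaly_score(activation, mu, cov_inv):
    """Squared Mahalanobis distance: high score = unusual activations."""
    diff = activation - mu
    return float(diff @ cov_inv @ diff)

rng = np.random.default_rng(0)
trusted = rng.normal(0.0, 1.0, size=(1000, 8))  # stand-in for trusted-run activations
mu, cov_inv = fit_detector(trusted)

normal_input = rng.normal(0.0, 1.0, size=8)
anomalous_input = rng.normal(5.0, 1.0, size=8)  # activations far from the trusted cluster
assert anomaly_score(anomalous_input, mu, cov_inv) > anomaly_score(normal_input, mu, cov_inv)
```

Note that this detector only looks at where activations land, which is a crude proxy; the whole point of the heuristic-arguments program is to formalize "reasons" more faithfully than such distributional baselines can.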

The Nonlinear Library
AF - Sparsify: A mechanistic interpretability research agenda by Lee Sharkey

The Nonlinear Library

Play Episode Listen Later Apr 3, 2024 40:09


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sparsify: A mechanistic interpretability research agenda, published by Lee Sharkey on April 3, 2024 on The AI Alignment Forum.

Over the last couple of years, mechanistic interpretability has seen substantial progress. Part of this progress has been enabled by the identification of superposition as a key barrier to understanding neural networks (Elhage et al., 2022) and the identification of sparse autoencoders as a solution to superposition (Sharkey et al., 2022; Cunningham et al., 2023; Bricken et al., 2023).

From our current vantage point, I think there's a relatively clear roadmap toward a world where mechanistic interpretability is useful for safety. This post outlines my views on what progress in mechanistic interpretability looks like and what I think is achievable by the field in the next 2+ years. It represents a rough outline of what I plan to work on in the near future. My thinking and work are, of course, very heavily inspired by the work of Chris Olah, other Anthropic researchers, and other early mechanistic interpretability researchers. In addition to sharing some personal takes, this article brings together - in one place - various goals and ideas that are already floating around the community. It proposes a concrete potential path for how we might get from where we are today in mechanistic interpretability to a world where we can meaningfully use it to improve AI safety.

Key frameworks for understanding the agenda

Framework 1: The three steps of mechanistic interpretability

I think of mechanistic interpretability in terms of three steps[1]:

Mathematical description: In the first step, we break the neural network into constituent parts, where the parts are simply unlabelled mathematical objects. These may be e.g.
neurons, polytopes, circuits, feature directions (identified using SVD/NMF/SAEs), individual parameters, singular vectors of the weight matrices, or other subcomponents of a network.

Semantic description: Next, we generate semantic interpretations of the mathematical objects (e.g. through feature labeling). In other words, we try to build a conceptual model of what each component of the network does.

Validation: We need to validate our explanations to ensure they make good predictions about network behavior. For instance, we should be able to predict that ablating a feature with a purported 'meaning' (such as the 'noun gender feature') will have certain predictable effects that make sense given its purported meaning (such as the network becoming unable to assign the appropriate definite article to nouns). If our explanations can't be validated, then we need to identify new mathematical objects and/or find better semantic descriptions.

The field of mechanistic interpretability has repeated this three-step cycle a few times, cycling through explanations given in terms of neurons, then other objects such as SVD/NMF directions or polytopes, and most recently SAE directions. My research over the last couple of years has focused primarily on identifying the right mathematical objects for mechanistic explanations. I expect there's still plenty of work to do on this step in the next two years or so (more on this later). To guide intuitions about how I plan to pursue this, it's important to understand what makes some mathematical objects better than others. For this, we have to look at the description accuracy vs. description length tradeoff.

Framework 2: The description accuracy vs. description length tradeoff

You would feel pretty dissatisfied if you asked someone for a mechanistic explanation of a neural network and they proceeded to read out the float values of the weights. But why is this dissatisfying?
Two reasons: When describing the mechanisms of any system, be it an engine, a solar system, o...
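The sparse autoencoder setup mentioned above can be sketched in a few lines. The ReLU encoder, overcomplete dictionary, and L1 sparsity penalty follow the common recipe in the cited work, but the dimensions and coefficients here are illustrative, and a real SAE is trained by gradient descent on a model's activations rather than left at random initialization as in this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_hidden = 16, 64  # overcomplete dictionary: d_hidden > d_model

# Parameters (random here; in practice learned by minimizing sae_loss)
W_enc = rng.normal(0, 0.1, size=(d_model, d_hidden))
b_enc = np.zeros(d_hidden)
W_dec = rng.normal(0, 0.1, size=(d_hidden, d_model))
b_dec = np.zeros(d_model)

def sae_forward(x):
    """Encode activations into non-negative features, then reconstruct."""
    f = np.maximum(0.0, x @ W_enc + b_enc)  # ReLU keeps features non-negative
    x_hat = f @ W_dec + b_dec
    return f, x_hat

def sae_loss(x, l1_coeff=1e-3):
    """Reconstruction error plus an L1 penalty that encourages sparse features."""
    f, x_hat = sae_forward(x)
    recon = np.mean((x - x_hat) ** 2)
    sparsity = l1_coeff * np.abs(f).sum(axis=-1).mean()
    return recon + sparsity

x = rng.normal(size=(32, d_model))  # stand-in batch of model activations
f, x_hat = sae_forward(x)
assert f.shape == (32, d_hidden) and x_hat.shape == x.shape
```

In the three-step framing, the learned feature directions (columns of W_dec) are the candidate "mathematical objects" that then get handed to the semantic-description and validation steps.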

The Nonlinear Library
AF - Laying the Foundations for Vision and Multimodal Mechanistic Interpretability & Open Problems by Sonia Joseph

The Nonlinear Library

Play Episode Listen Later Mar 13, 2024 26:23


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Laying the Foundations for Vision and Multimodal Mechanistic Interpretability & Open Problems, published by Sonia Joseph on March 13, 2024 on The AI Alignment Forum.

Join our Discord here. This article was written by Sonia Joseph, in collaboration with Neel Nanda, and incubated in Blake Richards's lab at Mila and in the MATS community. Thank you to the Prisma core contributors, including Praneet Suresh, Rob Graham, and Yash Vadi. Full acknowledgements of contributors are at the end. I am grateful to my collaborators for their guidance and feedback.

Outline

Part One: Introduction and Motivation
Part Two: Tutorial Notebooks
Part Three: Brief ViT Overview
Part Four: Demo of Prisma's Functionality, covering key features (logit attribution, attention head visualization, and activation patching) and preliminary research results obtained using Prisma (emergent segmentation maps and canonical attention heads)
Part Five: FAQ, including Key Differences between Vision and Language Mechanistic Interpretability
Part Six: Getting Started with Vision Mechanistic Interpretability
Part Seven: How to Get Involved
Part Eight: Open Problems in Vision Mechanistic Interpretability

Introducing the Prisma Library for Multimodal Mechanistic Interpretability

I am excited to share with the mechanistic interpretability and alignment communities a project I've been working on for the last few months. Prisma is a multimodal mechanistic interpretability library based on TransformerLens, currently supporting vanilla vision transformers (ViTs) and their vision-text counterpart, CLIP. With recent rapid releases of multimodal models, including Sora, Gemini, and Claude 3, it is crucial that interpretability and safety efforts remain in tandem.
While language mechanistic interpretability already has strong conceptual foundations, many research papers, and a thriving community, research in non-language modalities lags behind. Given that multimodal capabilities will be part of AGI, field-building in mechanistic interpretability for non-language modalities is crucial for safety and alignment. The goal of Prisma is to make research in mechanistic interpretability for multimodal models both easy and fun. We are also building a strong and collaborative open source research community around Prisma. You can join our Discord here. This post includes a brief overview of the library, fleshes out some concrete problems, and gives steps for people to get started.

Prisma Goals

Build shared infrastructure (Prisma) to make it easy to run standard language mechanistic interpretability techniques on non-language modalities, starting with vision.
Build a shared conceptual foundation for multimodal mechanistic interpretability.
Shape and execute on a research agenda for multimodal mechanistic interpretability.
Build an amazing multimodal mechanistic interpretability subcommunity, inspired by current efforts in language. Set the cultural norms of this subcommunity to be highly collaborative, curious, inventive, friendly, respectful, prolific, and safety/alignment-conscious. Encourage sharing of early/scrappy research results on Discord/Less Wrong. Co-create a web of high-quality research.

Tutorial Notebooks

To get started, you can check out three tutorial notebooks that show how Prisma works.

Main ViT Demo: Overview of the main mechanistic interpretability techniques on a ViT, including direct logit attribution, attention head visualization, and activation patching. The activation patching switches the net's prediction from tabby cat to Border collie with a minimal ablation.

Emoji Logit Lens: Deeper dive into layer- and patch-level predictions with interactive plots.
Interactive Attention Head Tour: Deeper dive into the various types of attention heads a ViT contains, with interactive JavaScript.

Brief ViT Overview

A vision transf...
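Activation patching, one of the techniques demonstrated above, can be illustrated on a toy two-layer network. This is a generic sketch of the idea, not Prisma's actual API: cache hidden activations from a "clean" run and splice them into a "corrupted" run; if the output moves toward the clean prediction, the patched layer carries the information that distinguishes the two inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))  # toy first-layer weights
W2 = rng.normal(size=(8, 3))  # toy second-layer weights

def forward(x, patch=None):
    """Toy two-layer net; optionally overwrite the hidden layer with `patch`."""
    h = np.maximum(0.0, x @ W1)  # hidden activations
    if patch is not None:
        h = patch                # the intervention: splice in cached activations
    return h @ W2                # logits

clean_x = rng.normal(size=4)    # e.g. the tabby cat image, abstracted to a vector
corrupt_x = rng.normal(size=4)  # e.g. the Border collie image

# Cache hidden activations from the clean run...
clean_h = np.maximum(0.0, clean_x @ W1)

# ...and patch them into the corrupted run. Patching the full layer here
# recovers the clean output exactly, since everything downstream depends
# only on the hidden activations; real experiments patch single heads or
# positions to localize which component carries the information.
patched_logits = forward(corrupt_x, patch=clean_h)
assert np.allclose(patched_logits, forward(clean_x))
```

In a real ViT experiment the cached tensor would come from a specific layer, head, or patch position, and the effect is measured as a shift in the logit difference between the two classes.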

Interplace
AI and Neuro-Narratives: Moving Beyond Mechanistic Minds

Interplace

Play Episode Listen Later Feb 21, 2024 14:45


Hello Interactors,

All the talk and evidence of AI, chips in the brain, and robotic overlords has created emotions ranging from hysteria to malaise to clinical depression. How much of this is caused or influenced by narratives spun by favored voices telling tall tales of proximal parables, and are there other ways to think about our brain than just a processor? Let's find out…

THE MENTAL MYTHS OF SILICON AND SYNAPSES

Our brain is an energy-intense organ. It consumes 20% of our energy but accounts for just 2% of our body weight. To manage this high demand for energy, the brain employs various strategies to simplify tasks and processes. One of those is to simplify how the world works. Like dividing it into discernible individual component parts.

In a world increasingly seduced by these crisp edges of in groups and out groups, there exists a tribe of techno-optimists, guardians of an old tale, who look to the brain as humanity's ultimate processor and a promise and desire for digital immortality. This romanticized notion of the “mind as computer” is facing competition as feats of AI reveal a seemingly superior capability to their own self-assuming super-intelligence. So, they want their outdated hardware upgraded. It's all positioned as cutting edge and futuristic but harks back to the clockwork dualistic and mechanistic universe of the Enlightenment.

We've been preached a digital gospel that suggests the warm wetware within our skulls operates like baked silicon chips, crunching data of daily existence with the cold precision of a CPU. Yet, simmering in the biochemistry that hosts these digital dreams are ripples of evidence captured and crunched by computers and displayed in the form of MRIs, fMRIs, PET scans, SPECT scans, NIRS, and MEGs.
These images lead some cognitive scientists, with the help of various forms of AI, to slowly dismantle the mechanistic metaphor of ‘the brain as CPU', piece by intricate piece.

The metaphor of the brain functioning as a processor is as old as Alan Turing and the mid-20th century computational theories that birthed computer science. These ideas and experiments propagated as mass media proliferated and now serve as common conceptions of how the mind works. Other historical and cultural factors contribute to the persistence of this metaphor, perpetuated among teachers, scientists, and attention-seeking tech moguls.

But it was centuries before, during the Enlightenment and the scientific revolution, that a significant shift towards rationalist, determinist, and mechanistic views of nature was put forth by figures like René Descartes and Isaac Newton. The world and its phenomena, including human beings and human thought, began to be understood in terms of mechanical laws and principles, laying the groundwork for comparing the brain to a machine.

The advancements in machinery and technology during the Industrial Revolution further reinforced the mechanistic view of life processes, including human cognition, making it easier to draw parallels between the operations of machines and the functions of the human brain. I recently wrote about Mary Shelley's Frankenstein as a prime example from that period.

Fast forwarding a century later, to the 1970s, I remember watching the “Six Million Dollar Man” on TV as a kid. This show was based on a Martin Caidin novel called Cyborg depicting an astronaut who survives a plane crash and is brought to life by replacing body parts with robotics. The “Six Million Dollar Man” was soon joined by “The Bionic Woman” and episodes that featured the faces of human robots being ripped off to reveal a computer inside. Naturally, these two computer-powered bionic superpowers worked as secret agents in U.S.
Office of…wait for it…“Scientific Intelligence.”

Source: YouTube

This was all occurring alongside emerging discoveries in artificial intelligence and cognitive science, further cementing the brain-CPU analogy. Like science fiction writers and directors, early AI researchers and scientists aimed to replicate human cognitive processes in computers, leading to conceptual overlaps between how brains and computers function in science and society.

The CPU metaphor provides a simplified way to understand the complex workings of the brain, making it accessible to people without specialized knowledge in neuroscience or cognitive science. This metaphor continues to be used in educational contexts to teach basic concepts about brain functions, reinforcing its prevalence.

The tendency toward reductionism — to reduce complex phenomena to their simplest components — is present in many scientific and engineering disciplines and has long contributed to the organ-as-part metaphor. Viewing the brain as akin to a computer's CPU aligns with reductionist approaches reminiscent of those early Enlightenment thinkers seeking to understand biological systems by dissecting their individual parts and drawing useful, but also isolated and simplified conclusions.

While the brain-CPU metaphor has historical roots and provides a convenient framework for understanding some aspects of cognitive function, many believe it is ultimately flawed. It can overlook the brain's integrated and dynamic nature, its entanglement within a larger biological organism, and its continuous interaction with a complex environment.
These are themes underexposed and underexplained in popular science, media, and most of the tech industry. The growing recognition of these limitations, particularly within fields like 4E cognitive science (embodied, embedded, enactive, and extended cognition), is leading to the development of more nuanced and holistic models of cognition that transcend simplistic mechanical analogies.

Do we have the energy to spare our brain so we may better understand it?

EMBODIED MINDS EMBEDDED, EXTENDED, AND ENACTIVE

The 4E framework in cognitive science highlights the brain's integrated and dynamic nature. Advances in neuroscience have shown that the brain is not a static organ with fixed functions but is highly malleable, capable of reorganizing itself in response to learning and experience. This plasticity allows for the adaptability and resilience necessary for its survival, characteristics not accounted for in the rigid structure of a CPU.

The brain's structure is composed of complex, interconnected networks that support a wide range of functions, from basic sensory processing to higher-order cognitive tasks. These networks do not operate in isolation but are dynamically interacting and reconfiguring based on internal and external demands.

The brain's function is also modulated by a variety of neurotransmitters that influence mood, cognition, and behavior. This biochemical layer adds a level of complexity to brain function that is absent in computer CPUs today. This means the brain is intimately connected to the biology of the body, receiving continuous sensory inputs and sending commands to our organs and limbs. This sensory-motor coupling is foundational to cognition, emphasizing the role of bodily interactions with the world and how our brain processes it.

Research supports this concept. Cognition is something that is embodied in us, where cognitive processes are grounded in sensory and motor systems.
For example, studies on mirror neurons suggest that understanding others' actions involves simulating these actions in our own sensory-motor systems. For instance, when a child observes an adult using a tool, such as a hammer, the mirror neurons associated with the motion of hammering may fire in the child's brain, despite the child not physically performing the action. This neural activity can help the child understand the action and later replicate it, contributing to the learning process.

Another example is in the understanding of emotions. When we see someone smiling or frowning, our mirror neuron system may activate the same facial muscles involved in smiling or frowning, contributing to an empathetic response. This internal mimicry can help us to 'feel' what the other person is feeling and develop a better understanding of their emotional state.

The brain is in continuous interaction with the complex environment in which we exist. It is embedded in an environment that it continuously interacts with, influencing and being influenced by it. This interaction is not merely passive; the brain actively constructs perceptions and meanings based on environmental inputs.

This enactive perspective posits that cognition arises through a dynamic interplay between an organism and its environment. Cognitive processes such as perception and action are therefore inseparable and co-determined. In the example of a child learning to use a hammer, they learn to grasp the handle not just by observing but through a process of trial and error. This involves actively engaging with the object and learning from the outcomes of these interactions, thus enacting cognition through interactive processes.

These dynamic interactions are extended beyond the brain and body to include tools, like a hammer, but also computers, mobile phones, and automobiles. These tools become part of the mind's cognitive architecture.
This perspective challenges traditional notions of cognition as being confined within the boundaries of the individual, proposing instead that objects and devices in our environment can function as extensions of our cognitive system when they are deeply integrated into our mental activities.

As the brains of neuroscientists interact with each other, their embedded and embodied brains are synthesizing an ever-evolving understanding of cognition that is more integral than dichotomous, more holistic than dualistic. Even as the brain employs cost-cutting simplification strategies, a rich emergent complexity emerges that further defines our cognitive reality.

The old metaphor of the brain as a CPU, once a mid-20th-century marvel, is gradually yielding to a perspective that sees the brain not as a solitary processor but as part of a dynamic, integrated system of organism and environment. As techno-optimists laud AI and digital immortality, praying to dualistic gods, the minds of some neuroscientists are extended by imaging tools powered by CPUs, presenting a model of cognition far from the mechanistic. Instead, they argue our brains are enmeshed in a dynamic and fluid biological existence.

It is here, in the flowing network of neurons and scientific narratives, that the future of understanding the human mind is taking shape. Even as I write this and you read it, we are moving our brain from the rigidly digital dualistic understanding to the fluidly enactive. In doing so, our brains are redefining our place within this emergent organism-environment system we call life, using as little energy as necessary.

References:

Knyazev, G. G. (2023). A Paradigm Shift in Cognitive Sciences. Neuroscience and Behavioral Physiology. DOI: 10.1007/s11055-023-01483-9

Newen, A., De Bruin, L., & Gallagher, S. (Eds.). (2018). The Oxford handbook of 4E cognition. Oxford University Press.

Wikipedia contributors. (n.d.). Enactivism. In Wikipedia.
Retrieved February 2024, from https://en.wikipedia.org/wiki/Enactivism

Agile FM
143: Jeffrey Liker

Agile FM

Play Episode Listen Later Feb 6, 2024 34:40


Joe has a book “Agile Kata” in the making; if you'd like to be the first to know when it launches, please visit www.agilekatabook.com.

Transcript:

Agile FM radio for the agile community.

[00:00:05] Joe Krebs: Thank you for tuning into another episode of Agile FM. Today, I have Dr. Jeffrey Liker with me. You probably know him from a, I would say, famous book with the title The Toyota Way. That is a book we want to talk about today a little bit, but there's so much, much more about Jeff. He is a professor of industrial and operations engineering at the University of Michigan. He's president of Liker Lean Advisors, and as I said, he wrote not only The Toyota Way, but he also wrote, if I did the count right, nine other books that relate to Toyota. There are two books that were published more recently, and we'll have a chance in a different episode to talk about those. One was in June 2023, Giving Wings to Her Team with Tilo Schwartz, and we have Engaging the Team at Zingerman's Mail Order, and that's more like a comic, if you want to see it this way, which he co-authored with Eduardo Lander and Tim Root. So that is the list of books, if I haven't missed anything, but we want to talk a little bit about The Toyota Way before we do that. Welcome to the show though, Jeff.

[00:01:13] Jeffrey Liker: Thank you, Joe.

[00:01:16] Joe Krebs: Awesome. So The Toyota Way was initially released, I believe, somewhere in 2002 or 2003. The book we're talking about is The Toyota Way, second edition. This is also very important: we're talking about the second edition, which was released somewhere in the year 2021 timeframe.

[00:01:34] Jeffrey Liker: Yes, about two years old.

[00:01:36] Joe Krebs: Yeah. But there is something that happened in that book that is fundamentally different. I don't know all the changes, but there's one fundamental change, and that is the inclusion of scientific thinking.

[00:01:52] Jeffrey Liker: Right, right. A little over five years ago, Mike Rother and I jointly gave a presentation when my book, The Toyota Way, hit 20 years old, so the 20-year anniversary, and his book, Toyota Kata, was, I believe, 10 years old, and we started talking about the relationship between the two. Mike was one of my students, and he had practiced lean transformation for many years and was very familiar with the Toyota Way and all the concepts of Toyota, and had studied Toyota. And then he came up with this thing called Toyota Kata, and I had to kind of struggle to figure out what it was and what he was trying to add to what we know about Toyota. What he really did was to reverse engineer what Toyota, we call them Toyota Sensei (Sensei is like a master teacher), so what the Toyota Sensei, who are experts on the Toyota Production System, do when they work with a new client outside of Toyota. How do they teach it? And they always teach by doing. And he had a chance to see a lot of companies that these different Toyota masters worked with, and their masterpieces, and asked the question: what do they have in common? And they're all very successful; they almost won't even bother working on a project unless they can at least double productivity, and that just happens almost automatically. So he knew that they got great results. But the question is, what are they doing? And in fact, each of these masters.
Each has a bit of an ego, and they think that they're doing it the right way and the best way, and nobody else can do it the way they do. But he found an underlying pattern, which he called scientific thinking. What he noticed is the first thing they do is they grasp the situation, as they call it in Toyota. They go in, they see what's going on, they talk to the top leaders, and they ask: What is it that they're trying to accomplish? What is their goal? What is their purpose? Why do they want to learn about lean management? What is their vision for what would happen if they were successful?

Then they go to the Gemba, where the activity is. It could be a factory. They've worked with places where they gave injections for COVID-19, places where they made ventilators for COVID-19, and software houses where they develop software. They don't really care; they will go to the Gemba, and they'll see the process and understand the current conditions. So they'll grasp the situation generally, and then they'll go back and they'll say: here's where you're at, here's the challenge for you. And the challenge is always big, you know, like we will double productivity, or we will reduce costs by 30 percent, something pretty big based on the needs of the company. The company may have runaway late deliveries and be paying a ton for premium freight, and we'll say we will eliminate all shipping like that. Then they will go back to the Gemba with a team of people from the company, and they will teach them how to see, how to understand the process as it is. Mike calls this the current condition. And then the people in the company will basically wait and expect answers, solutions, from the masters: so what do we do? And the masters will say, that's my question to you. What are you going to do? You see where you are, you see where you want to be. You see all sorts of opportunities.

What do you think you should be working on first? And then, based on what the students say, they may ask them to go back and look some more. Or they may say, why don't we try it? Usually, because it's a big challenge, what these people in the company come up with is a fairly big thing. It might be, for example, in a manufacturing facility, moving equipment around and laying it out as a cell. And the sensei will ask, can you do this? And they'll say something like, well, we have to talk to engineering, and we have to make sure the customer is okay with this. We have to line up the maintenance people to move the equipment. So, if we really stretch it, maybe we could do it in a week. And then the master will say, good, I'll be back tomorrow. And that, like, starts the process. Now, of course they can't do it in a day; they can't get all the approvals. So what the sensei is trying to get them to do is: you don't have to do a hundred percent in one step. Let's try something that's doable and then see what happens. Then we can learn from it, and then we can think about, based on that, what our next step is.

Usually what happens is, like, for example, if they lay out a cell, it'll be a disaster. They'll move the equipment together, and they'll realize that the equipment has maintenance issues and it's breaking down, and everything stops because they don't have inventory anymore. Usually they can barely make product, and the, you know, the mentors say, that's okay. Let's start working on the problems now that we see what the problems are. You were hiding them before. Now let's start working on the problems one by one.

So Mike saw that, and he saw it enough times that he realized what these master thinkers were doing. They were not teaching tools and methods like most of the Westerners were doing with lean. They were teaching a way of thinking. Yeah.

And it was actually very scientific. What's your goal? What's your current condition? Known fairly precisely, with measurements and direct observation. And then, let's not try to get to the challenge in one step. Let's break down the problem. All we really need to understand is our first step, and then after that our second step, our third step, and each of these steps was structured like an experiment. They might ask them, what do you think will happen if we make the cell? And then, you know, the people will say, oh, well, our productivity will go up or quality will go up. Let's see what happens. It's a disaster. So what did we learn from that? We learned that we have a lot of problems that we've been hiding. Now we can see the problems; we have to solve them. They're also trying to teach the value of running the experiment and learning from it, which then gives you the next step, and the next step. So that became the basis for what Mike calls Toyota Kata.

The other part of it was that, in the meantime, he was studying neuroscience and cognitive psychology and how we learn, and there's a lot of literature that suggests that none of us are natural scientific thinkers. We're driven more by biases and the desire to know things, whether we do or not. So we want a lot of certainty, and we want to be right. We're going to, in fact, fudge the data to make it appear that we're right. That's called confirmation bias, which is really strong in humans. So he realized that to change people, to get them to start to think and act scientifically, requires fundamental behavior change. It means changing our habits. And then he asked the question, how do you change habits?
And the literature on cognitive psychology and neuroscience, as well as practical experience, for example with coaching sports teams, all says the same thing: we have to practice repeatedly with feedback, and if that's done enough times it becomes a new habit. So then he asked, how can we practice scientific thinking? And he said, first we need a model, which we have: challenge, current condition, first short-term target condition, then experiment; then second target condition and experiment; then third target condition and experiment. Then he said, how can we teach this? Each of those steps has some associated ways of thinking and tools and practice routines, things to practice. So he laid that out in what he calls the Toyota Kata Practice Guide, which is pictures and step-by-step instruction, like a recipe book. And he came up with kata, which comes from the martial arts and means small practice routines that teach a complex skill by breaking it down and trying the pieces one by one. In karate, they'll have the first kata and won't move to the second kata until you learn the first kata correctly. So it's an evolution. And think about, you know, taking music lessons: you practice until you can play the very simple piece, then you go on to the next, more complicated piece. So that led to the whole Toyota Kata, which is a model plus the practice routines. And as you practice them, you begin to think more naturally in a scientific way.

[00:11:20] Joe Krebs: Right. So what's interesting is, when I started looking at Mike Rother's work on Kata, and obviously I read your first edition and came to the second edition, it became more and more eye-opening: these habit changes, the habits we have and the habits we want to change, that's the same in the agile community, right? We have certain habits of how we build software or how we release software and go through transformations and all these cultural changes. So it's just like this meta-skill, if you want to see it this way. That's fascinating, when I came across this. Now, I do want to make sure that, if I understand this right, this is obviously not that Toyota started with scientific thinking in 2021. It was there before, right? It's something that was carved out, something that should go into The Toyota Way as this core thing. So if you look at...

[00:12:10] Jeffrey Liker: Yeah. So we ended up giving a presentation where we said The Toyota Way and Toyota Kata play well together, as if they were separate things. And then, thinking about it some more, I realized that scientific thinking really underlies what I called in The Toyota Way the four Ps of the Toyota Way. The first was philosophy, which I refer to as long-term systems thinking. The second is lean processes, the process of trying to work toward one-piece flow. The third is developing people in problem solving, which is the fourth P. And I realized these are all connected through scientific thinking, right? If you're not thinking scientifically, you can't do any of them. For example, you can't be a systems thinker, because non-scientific thinking is reductionist: we assume every individual tool operates on its own. So we implement Kanban to get inventory reductions, and we implement standardized work to get productivity improvements. We're seeing isolated tools, as opposed to a whole system, which is what Mike called the Toyota Production System. So with that, I then started to rethink the book from the point of view of scientific thinking being at the center, and also realizing that you can't really talk about lean as if it's a bunch of mechanistic pieces that you individually build and then they all suddenly fit together. You have to talk about more of an evolutionary learning
Organization. [00:13:48] Joe Krebs: Yeah. This is interesting. So, I have never consulted for Toyota myself but I was told that the word Toyota Kata does not really, it's not a use, it use Toyota.[00:13:58] Jeffrey Liker: That was not their word. It was Mike's. [00:13:59] Joe Krebs: Exactly. Yeah. [00:14:00] Jeffrey Liker: Description from the outside of what he learned in Toyota. And then he went further and say, the Japanese sensei, they tend to be pretty mysterious Yeah. Yeah, it's light. For example, do it tomorrow. Yeah, we'll come and see. So what should I do now? What do you think you should do now?Yeah. So they tend to be mysterious, but he realized that if we want to mass distribute this to people that don't have access to those magical Japanese, we need a very explicit and simple methodology. So he developed in great detail, this methodology that in Toyota, they wouldn't think they had to use because they, what they say is that from the day you enter the company, the culture is so strong.You begin to learn Kaizen. [00:14:49] Joe Krebs: Yeah. Interesting. So, what was that one of the reasons why you decided to call that core scientific thinking, or was it more like, because it's the thinking and not the tool, it's not the pattern [00:15:00] Jeffrey Liker: thinking now it turns out. You go back to the first Japanese pamphlet. Really? It was a document for the first Japanese document that describes the Toyota production system.It says that it's based on scientific thinking. So for people in Toyota, that's not. Unusual. It's not a stretch, but they, and they think of scientific thinking more empirically than theoretically. So there's theoretical science where we just. In the abstract. And then we deduce from that things and we apply the abstract model to a problem.And then there's inductive science where we look at the phenomena and the empirical reality. And then we induce from that principles and solutions. 
And so in Toyota they learned that you need very specific solutions to very specific problems, not general solutions to a whole general class of problems. You need both to some degree, but they're much more focused than most on solving this problem, right here, right now. So when they see product development in software, when we're developing a software program, maybe they see it as a part of product development, but they're not going to come in and say, here's your ten-step roadmap to great software. They're going to ask: What is your problem? What are you trying to accomplish? What's your goal? Let's go look at your current process. They want to know the specifics of your situation and your goals, and they want you to learn how to think scientifically, to learn for yourself how to achieve whatever goals you have and to adapt and adjust as the environment changes.

[00:16:45] Joe Krebs: It's fascinating. When I open up your new book, the second edition, there's a section, and I don't know if that's the content of your masterclass, I do know you teach a lot of masterclasses, where you describe the transition from a mechanistic lean to an organic lean. And when I go through the list for organic lean, for somebody who has been using, learning, and applying Kata thinking more and more, it just maps one to one to scientific thinking, right?

[00:17:17] Jeffrey Liker: Yeah. The other part is that whatever performance-improvement program you have, whether it's lean or agile or theory of constraints or whatever, if you look at it from what I call the mechanistic point of view, then you're trying to fit square pegs into round holes: I want your problems to fit into my model. And there's the other expression people use: if you have a hammer, everything looks like a nail. You can apply lean, agile, or Six Sigma mechanistically, or you can apply any of those things organically. Organically, you start with a problem. You want to engage the people who understand the gemba, the reality, the best, and you want to teach them how to think differently about their process, so they develop the skills of problem solving and performance improvement. You expect to be surprised, and you expect that you won't know the answers until you start digging in and trying things. From the mechanistic point of view, I have the solution and I'm going to sell you the solution, even though I've never been in your place, I've never seen your process, and I don't know anything about it, but I have the solution for you. That's taking an abstract solution and assuming it's going to apply, in the abstract, to any similar type of problem, staying at that theoretical level.

[00:18:49] Joe Krebs: I want to take one example, and I'd like to hear your opinion, since you teach these classes and are surrounded by leaders in those workshops. There's a line in the book that captures organic lean well: it's not a project, it's a journey. From a cultural perspective, it's not an initiative that starts here and ends by the end of March or whatever arbitrary date somebody picks; it's an ongoing activity, which entirely shifts the view from a leadership perspective. What do you hear when you challenge people with a concept like this, continuous improvement as a journey?

[00:19:39] Jeffrey Liker: When I teach the masterclass, the people who usually come have titles like director of continuous improvement or vice president of operational excellence, and I'll get some people who might be the head of operations or a plant manager. There's probably some self-selection, but they all agree, when we talk about it, that the approach they used in the past was very mechanistic. And particularly after they see it in Toyota, because we do this with Toyota, they see the value of engaging all the people, and of leaders acting more as coaches than as disciplinarians. And they say: that's what we need. So they conclude they want to move toward a more organic approach. But then they also feel a little concerned and nervous, because, they say, my boss's boss expects immediate, measurable results from everything we do with lean. If you're telling me it takes time, that it takes investment in developing people, and that there's a time gap between the investments we make in developing people, for example teaching them using Kata, and the results we get, we're going to have a hard time selling that. So what we usually end up concluding is that you need both: there is some value in experts coming in with the tools, eliminating waste, streamlining processes, and getting quick results across a more expansive part of the organization, because these people are coming from big companies.
They might have 30 or 40 or 50 manufacturing plants. And there's also value in piloting, within a smaller area, some of the deeper approaches to changing ways of thinking and changing culture. With the successes you have in those model areas, you have something to sell to senior management: come and see this, and see how much better they perform. So the vision they usually leave with is that they have to find a balance. I have a slide that shows the scales of justice: they have to balance the more mechanistic approach, quick, short-term, and superficial, an inch deep and a mile wide, which is deploying the tools, against the deeper approach, an inch wide and a mile deep, developing people one by one, which you would be doing with Kata. They have to find a balance between those things, and they have to figure that out through their own scientific-thinking journey. They have to figure it out inside their company by trying things, by experimenting. So I ask them: instead of leaving here with a whole bunch of solutions that you're going to bring back and implement in your company, think about one big challenge that would really make a difference to your ability to deploy lean and sell lean, and define that as the challenge. Then what do you do next? And they say, well, we have to solve the problem. Okay, so how do you solve the problem? Do you go back there and say, we need standardized work, we need the work groups we saw at Toyota? And they say, no, those are solutions. We have to understand the current condition first. And I say: that's great, wonderful.

[00:23:09] Joe Krebs: Back to scientific thinking. This is awesome. Your book's first edition came out, as we said, several years ago, 2002 or something like that. Why do you think, at least from an agile perspective, where there are other terms floating around, and I don't want to pick any because it wouldn't be a complete list, we see a lack of these terms being used on a broader level? You have sold so many books, and people look at this material and say it's wonderful, but the implementation seems slow, the transition, like taking companies to lean, or even agile transformations. Do you think it has something to do with the culture, like, for example, using Japanese terms?

[00:24:01] Jeffrey Liker: I don't think it's that so much. I think there is sometimes a sense that, since this is a car company, you have a stereotyped picture in your mind of what a car company does. The first thing you often think about is the assembly line, where cars run down the line and people attach things to them. And you say, well, that doesn't look anything like what I do, so it doesn't apply to me. So there's a lot of "we're different." And it could be anything: we're a manufacturing company, but we make chemical products, and it's not like cars going down a line. Or we're a finance company and we don't make any physical products. Or we work with a mining company that does iron-ore mining in Australia, where we blast and dig and handle big batches of material, so how do we get to one-piece flow? The problem is that you have to shift your thinking away from mechanical solutions, away from "I'm trying to look over there for solutions that apply in an obvious way to me."
You have to shift that thinking to: there are some general principles here that have been abstracted, which I can then bring to my operations. People who are well trained in lean or in Kata get very comfortable going into any new environment without knowing what the solutions are, and then digging in and trying to understand the current condition of that operation. The easy thing to do is to copy solutions like a template: give me a template, I'll superimpose it on my process, and I know what to do. The harder thing is to take a more abstract concept, like "I need to define a challenge." Even when I ask my classes to define a challenge, they struggle. "We want a culture of continuous improvement" is way too abstract. "We'd like five suggestions per employee" is way too specific. Finding the right level for the challenge is itself challenging; it challenges our thinking. And then, what do you look at in the current condition? It may not look like a Toyota plant, where you can say it takes 60 seconds per car and break the steps of attaching a window wiper down to a reach that takes 2 seconds; that's their current-condition analysis. Your current-condition analysis may be very different if you don't have a routine, repeating process, but there are ways to understand the current condition in any sort of process. I remember Deming saying that if you don't think you have a process, you can't improve anything. Even in software, people say: we develop software, every software project is different, and the process is just that we understand what the customer wants and we do it; there's no process beyond that. But there are processes, there are habits, there are routines that you have, and you need to shine a light on them, understand them, and then figure out, from where you are, how to start moving in the direction of the ideal model you have in your head. It takes thinking. And thinking is tiring.

[00:27:36] Joe Krebs: That's my recommendation to all the listeners out there. When I went down this journey, I did exactly what you just said: I looked at the material and extracted the information, saying, okay, this is not about Toyota; how would this apply and map to the agile world? I'm calling it Agile Kata because it removes the reference to the Toyota brand in this particular case, but the thinking is the same, the scientific thinking, surrounded by agile principles.

[00:28:12] Jeffrey Liker: The last thing I'd like to say is that, as I dug into agile, you and I met at Menlo Innovations, which is kind of a benchmark for agile in software development. I looked at what they're doing there, what Richard Sheridan is doing, and I saw lots of similarities to the Toyota Way, and also a lot of similarities to Toyota Kata. And I worked on Zingerman's Mail Order with Tom Root, who is one of the owners. He was originally an IT guy, and the backbone of the mail-order business is the IT system.
And what I discovered, having talked to you and to a lot of different IT people, is that many of the concepts of scientific thinking are actually quite natural for programmers. If I say we need to think in terms of systems and how the parts interact in a manufacturing environment, they might treat me like I'm from another planet, but the software person will say, of course. Then there's the idea that you have to have a vision for what the software is going to do and understand the customer, break that down into small elements of some sort, call them features, develop one feature at a time, compile it, make sure everything works together as a system, then build the next feature and compile it. It's a step-by-step learning process: breaking the big problem into small pieces and then solving each problem one by one. A software programmer hears that idea and says, how would you do it any other way? So the high-level model of the Kata makes perfect sense within the world of software development; the question is how to do it in a structured, systematic way and make it part of the culture, make it natural. For example, at Menlo Innovations they do unit testing. If I say "unit testing" to software programmers, they say, of course, we know what that is. Okay, then let's go see your program, and show me the unit tests you've conducted.

Joe Krebs: Exactly. And hopefully there will be more and more examples like this, beyond the one you're naming. Somebody might be listening to this, or reading the second edition of your book, The Toyota Way, and building these bridges to whatever environment they are in.

Jeffrey Liker: In The Toyota Way I do write about Menlo Innovations.
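The feature-by-feature rhythm Liker describes, build one small piece, verify it with a unit test, then build the next, can be sketched in a few lines of Python. This is a hypothetical illustration, not Menlo Innovations' actual code or process; the order-parsing "features" and function names are invented:

```python
# Hypothetical sketch of "develop one feature at a time, then verify it."
# Not Menlo Innovations' actual code; the features here are invented.

def parse_quantity(text):
    """Feature 1: parse an order quantity, defaulting to 1 when blank."""
    text = text.strip()
    return int(text) if text else 1

def format_order_line(item, quantity):
    """Feature 2: built only after Feature 1's tests pass."""
    return f"{quantity} x {item}"

# Unit tests are the quick feedback loop for each small feature.
def test_parse_quantity():
    assert parse_quantity("3") == 3
    assert parse_quantity("  ") == 1

def test_format_order_line():
    assert format_order_line("coffee", 2) == "2 x coffee"

if __name__ == "__main__":
    test_parse_quantity()
    test_format_order_line()
    print("all features verified")
```

The point is the cadence, not the code: each feature gets its own fast check before the next one is started, which is exactly the "show me the unit tests you've conducted" conversation above.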
And so that's another thing I did in the new edition: I added more service examples, software examples, and examples from other places, which I didn't have at the time I wrote the original book. So that's in the book, and the way of thinking is still the same. One of the things that happened with Richard, to tell you the truth, is that he started to get turned off by a lot of the agile examples he was seeing, because, like the lean folks, they were often simply using a tool, as if agile were almost equivalent to writing things on Post-it notes. He has a whole culture, he calls it a deliberate culture, that he had to develop: paired programming, with programmers learning from each other; sending what he calls high-tech anthropologists out to the customer to deeply understand the gemba and how they're using the software; and getting the customer in week by week, every single week, to test the software and give feedback to the team. There's a whole set of practices he had to create as the standard for the culture of Menlo. It took an awful lot of work, and it was much more than buying a lot of Post-it notes.

[00:32:08] Joe Krebs: Yes, and maybe that was one of the reasons why, when we all agreed we were going to meet in Ann Arbor in September 2023, and it was Mike, it was you, and several others, Richard Sheridan was the first one to say, count me in, and I'm offering my office space for this, because it's so important.

[00:32:25] Jeffrey Liker: Yeah. He didn't understand Kata at first, but then he understood it and said, that's pretty much what we do, isn't it?
Then he added some things to what they do, because they weren't working in a deliberate way using the scientific approach of Kata. But the overarching way they worked and developed all their software was very much: the vision, the current state, break it down into small pieces, solve one problem after another with very quick feedback, then get it to work. One of the interesting things about Menlo Innovations projects, because they're developing custom software, none of it off the shelf, is that if they do a one-year project, in the 52nd week the only thing they have to get right is one week of work, because 51 weeks of work already works perfectly. So there's basically zero rework, and they have a hundred percent customer satisfaction. The customer takes the software out of the box and just starts using it.

[00:33:37] Joe Krebs: I want to thank you, Jeff, for these insights on the second edition of The Toyota Way. If the second edition is even somewhat as successful as the first edition in terms of sales, thousands of books will be sold and thousands of readers out there will be exposed to scientific thinking, and that's a good thing, through your materials. I want to thank you for that. And to everybody out there: if you are interested, go to the show pages. I'm going to list Jeff's books and, obviously, ways of learning about Kata and applying it in an agile context. I have some additional pointers on where to go and what to do first and second, and obviously the Kata bookshelf is growing, thanks to you too, Jeff, along with many more ways to learn about scientific thinking. Thank you, Jeff.

[00:34:27] Jeffrey Liker: You're welcome. It's my pleasure, Joe. Take care then.
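The improvement pattern discussed throughout this episode, a challenge, a grasp of the current condition, and a series of target conditions reached through quick experiments, can be sketched in code for software readers. A minimal, hypothetical sketch: it compresses a "condition" to a single numeric metric, which real kata coaching never would, and the names and numbers are invented:

```python
# Hypothetical sketch of the Improvement Kata loop described in this episode.
# A real "condition" is a rich description of a process, not one number; the
# metric here is invented purely to make the loop runnable.

def improvement_kata(challenge, current, targets, run_experiment):
    """Move toward a challenge one target condition at a time."""
    # The challenge sets the direction; target conditions are stepping stones.
    for target in targets:
        while current < target:                # an obstacle still blocks the target
            current = run_experiment(current)  # one small, fast experiment
    return current

# Example: treat the condition as first-pass yield, in percent.
result = improvement_kata(
    challenge="reach 95% first-pass yield",
    current=70,
    targets=[80, 90, 95],
    run_experiment=lambda c: c + 5,  # each experiment teaches us a little
)
print(result)  # -> 95
```

The structure mirrors Liker's description: you never jump straight at the challenge; you set the next target condition and experiment until you reach it, then set the next one.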

Interplace
Frankenstein Reimagined: Bioelectricity and the Quest for Life Beyond Mechanism

Interplace

Play Episode Listen Later Feb 4, 2024 13:39


Hello Interactors,

A Frankenstein announcement from Musk this week punctuated my recent fascination with the author of that popular novel, Mary Shelley. Her isolated lived experience in a time of intense technological discovery, social and geopolitical unrest, AND a climate crisis rings true today more than ever. But she was also subtly representing a scientific movement that is largely ignored today, yet may be experiencing a bit of a resurgence in areas like biology and neuroscience.

Let's dig in…

FRANKEN-MUSK

"It was already one in the morning; the rain pattered dismally against the panes, and my candle was nearly burnt out, when, by the glimmer of the half-extinguished light, I saw the dull yellow eye of the creature open; it breathed hard, and a convulsive motion agitated its limbs."

Mary Shelley was intrigued, and maybe a little scared, by the idea of electrifying organs. She admits as much in her 1831 foreword to her famous novel, "Frankenstein," first published January 1, 1818. She wrote:

"Perhaps a corpse would be re-animated; galvanism had given token of such things: perhaps the component parts of a creature might be manufactured, brought together, and endued with vital warmth."

Bioelectrical experimentation had been happening for nearly 40 years by the time Shelley wrote this. Luigi Galvani, an Italian physician, physicist, and philosopher, demonstrated the existence of electricity in living tissue in the late 1780s. He called it "animal electricity." Many repeated his experiments over the years, and "galvanism" remained hotly debated well into the 1800s.

I've been thinking a lot about Shelley and her "Frankenstein" lately. The hype and hysteria surrounding AI, human-like robots, and biocomputing make it easy to imagine why. Just last week Elon Musk tweeted that his company, Neuralink, had implanted its brain chip in a human for the first time. He wants to make "The Matrix" a reality.
Here we are, some 200 years later, wanting to believe "perhaps the component parts of a creature might be manufactured, brought together, and endued with vital warmth."

"Vital warmth" seems a phrase borrowed from another scientific movement of the time, vitalism. Vitalism is the belief that living organisms are fundamentally different from non-living entities, like computer chips, because they are governed by a unique, non-physical force or "vital spark" that animates life. It is a kind of teleology for which some contemporary biologists now claim empirical evidence.

One prominent vitalist of the 18th and 19th centuries, the German physician, physiologist, and anthropologist Johann Friedrich Blumenbach, is best known for his contributions to the study of human biology. He developed the concept of the "Bildungstrieb," or "formative drive," which he proposed as an inherent force guiding the growth and development of organisms. Contemporary science explains these processes through a combination of genetic, biochemical, and physical principles: encoded DNA, gene expression networks, and morphogenesis, the interactions between cells and their responses to various chemical and mechanical forces.

THE INDUSTRIALIST'S VITAL SPARK

"Formative drive" was a vitalist response to the mechanistic explanations of life that were prevalent in the Enlightenment period, the same mechanistic fervor that endues so many technologists today, like Musk, with vital warmth. Blumenbach argued that physical and chemical processes alone could not account for the organization and complexity of living beings. Instead, he suggested that some other vital force was responsible for the development and function of organic forms.

Vitalists had their skeptics. Chief among them was Alessandro Volta, who was critical of Galvani's "vital spark."
In Galvani's frog-leg experiments, he discovered that when two different metals (e.g., copper and zinc) were connected and then touched to a frog's nerve and muscle, the muscle would contract even without any external electrical source. Galvani concluded that this was due to an electrical force inherent in the nerves of the frog, a concept that challenged the prevailing views of the time and eventually laid the groundwork for the field of electrophysiology.

Volta, however, believed the electrical effects were due to the metals used in Galvani's experiments. Volta's work eventually led to the development of the voltaic pile, an early form of battery, hence the term "volt." The voltaic pile enabled a more systematic and controlled study of electricity, which was a relatively little-understood phenomenon at the time. It provided scientists and inventors with a consistent and reliable source of electrical energy for experiments, leading to a deeper understanding of electrical principles and the discovery of new technologies.

One such technology was the telegraph, invented in the 1830s. The availability of electric batteries as power sources is what made it possible for Samuel Morse to revolutionize long-distance communication, profoundly affecting commerce, governance, and daily life. As he transmitted in his first public demonstration, "What hath God wrought?"

The mechanists gained further favor as more and more scientists, inventors, and eventually economists succumbed to the allure of reductionism. They believed complex phenomena could be understood by studying their simplest, most fundamental, mechanistic parts. Including body parts.

ECHOES OF THE INDUSTRIAL AGE

It was around the time of Morse's tinkering that Mary Shelley reissued "Frankenstein." She revealed in her 1831 foreword how she was influenced by the scientific and philosophical ideas of the late 18th and early 19th centuries.
This included galvanism, the debates around vitalism, and the Romantic movement's reaction to the Enlightenment's emphasis on reason and science. It was also a period marked by significant political, social, and technological upheavals. The consolidation of nation-states and the expansion of political power were central themes of the era, leading to debates over government intervention and the balance between order and liberty. Shelley's narrative, set against this backdrop, can be seen as a reflection on the consequences of unchecked ambition and the ethical responsibilities of creators, themes that are increasingly relevant in today's discussions about artificial intelligence, genetic engineering, and other forms of technological innovation.

Moreover, Shelley's personal history and the socio-political context of her time deeply informed the themes of her novel. As the daughter of Mary Wollstonecraft, a pioneering feminist thinker, Shelley was exposed from an early age to what were then radical ideas about gender, society, and individual rights. Her own experiences of loss, isolation, and vulnerability were compounded by the societal upheavals of the Little Ice Age and the dawn of the Industrial Revolution. "Frankenstein" is imbued with a profound sense of existential questioning, and it critiques the dehumanizing aspects of technological and industrial progress, themes that resonate with many today.

As in the early Industrial Revolution, we are living in a period of transforming economies, social structures, and daily life, one ushering in new forms of labor, consumption, and environmental impact.
The creation of Shelley's ‘Creature' can be seen as a metaphor for the unforeseen consequences of industrialization, including the alienation of individuals from their labor, from nature, and from each other.Shelley's narrative warns of the dangers of valuing power and progress over empathy and ethical consideration, a warning that remains pertinent as society grapples with the implications of rapid technological advancement and environmental degradation. Mechanistic reductionism, with its emphasis on dissecting complex phenomena into their most basic parts, undeniably continues to dominate much of science, technology, and conventional thought.Mary Shelley's "Frankenstein," while serving as a cautionary tale about the hubris and potential perils of unchecked scientific and technological ambition, has paradoxically also fueled the collective imagination, inspiring generations to dream of creating a human-like entity from disparate parts and mechanisms.Yet, there is an emerging renaissance that harks back to the holistic perspectives reminiscent of early vitalism. As scientists increasingly traverse interdisciplinary boundaries, embracing the principles of holism and complexity science, they are uncovering new patterns, principles, and laws that echo the intuitions of early vitalists.The groundbreaking research of Michael Levin at Tufts University, with its focus on bioelectric patterns and their role in development and regeneration, offers a compelling empirical bridge to Blumenbach's ‘formative drive'. While Levin's work eschews the metaphysical aspects of a "life force," it uncovers the intricate bioelectric networks that guide the form and function of organisms, echoing vitalism's fascination with the organizing principles of life.This shift acknowledges that life's essence may not be fully captured by reductionist views alone. 
Levin shows how it's not the mechanisms of DNA alone that unlock the mysteries of biological organization, but the communication between cells and their environment. It points toward a more integrated understanding of the natural world, one that respects the intricate interplay of its myriad components.

Shelley's pondering remains relevant today: "perhaps the component parts of a creature might be manufactured, brought together, and endued with vital warmth." Either way, "Frankenstein" continues to remind us of the need for humility and ethical consideration as we navigate the complex frontier between mechanistic ambition and our fragile, emergent, and interconnected life. After all, neurobiology tells us our own neural connections are being reshaped by both environmental interaction and cognitive activity, reflecting principles of embedded cognition that those early vitalists would surely endorse.

Dementia Matters
Do Non-Medicinal Interventions Reduce Dementia Risk? Studies are Inconclusive, Here's Why

Dementia Matters

Play Episode Listen Later Jan 9, 2024 34:34


In 2017, the National Academies of Sciences, Engineering, and Medicine published a report showing promising but inconclusive evidence suggesting that interventions like cognitive training, blood pressure control, and increased physical activity reduce a person's risk for dementia, but what does the research show now? Dr. Luke Stoeckel from the National Institute on Aging (NIA) joins the podcast to share where the research on lifestyle interventions is at, why studies on these interventions are difficult to complete, and more.

Guest: Luke Stoeckel, PhD, program director, Mechanistic and Translational Decision Science Program, Division of Behavioral and Social Research (DBSR), NIA

Show Notes
Learn about the National Academies of Sciences, Engineering, and Medicine on their website.
Read the 2017 National Academies' report, "Preventing Cognitive Decline and Dementia," for free on their website.
Learn about the National Academies' Research Priorities for Preventing and Treating Alzheimer's Disease and Related Dementias and find upcoming workshops on their website.
Find information on the 2021 National Academies workshop mentioned by Dr. Chin at 1:36 on the National Academies website.
Learn more about Dr. Stoeckel in his bio on the National Institute on Aging website.

Connect with Us
Find transcripts and more at our website.
Email Dementia Matters: dementiamatters@medicine.wisc.edu
Follow us on Facebook and Twitter.
Subscribe to the Wisconsin Alzheimer's Disease Research Center's e-newsletter.

Enjoy Dementia Matters? Consider making a gift to the Dementia Matters fund through the UW Initiative to End Alzheimer's. All donations go toward outreach and production.

Metamodern Spirituality
41. Debating the Place of the Human in Cosmology (w/ Matt Segall)

Metamodern Spirituality

Play Episode Listen Later Dec 22, 2023 90:19


Matt Segall joins me to debate the relative merits of "anthroposophical" approaches to addressing the meaning crisis, such as those adopted in the works of Steiner, Stein, and Gafni, and, to some degree, Whitehead.

0:00 Introduction
4:15 Revisiting the Image of Nature: Revitalizing Romanticism?
12:00 Humans in a Cosmos or a Cosmos Known by Humans?
21:17 Is Mechanism Just a Part of the Process? Emergence All the Way Down
30:05 Advance or Regression? Thinking in Terms of Assimilation and Accommodation
38:13 Defining a "Mechanistic" Approach: The Minimal Need for Causality
41:17 Retrojecting Novelty into Primals? Positing "Prehension"
57:01 Upshot: So... Is the Universe Expanding or Not?
1:11:52 Against a Model, or Models?
1:17:13 What Does an "Embodied" Knowledge Entail?
1:25:25 Conclusion

The Nonlinear Library
LW - How useful is mechanistic interpretability? by ryan greenblatt

The Nonlinear Library

Play Episode Listen Later Dec 1, 2023 38:34


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How useful is mechanistic interpretability?, published by ryan greenblatt on December 1, 2023 on LessWrong. Opening positions I'm somewhat skeptical about mech interp (bottom-up or substantial reverse engineering style interp): Current work seems very far from being useful (it isn't currently useful) or explaining much of what's going on inside of models in key cases. But it's hard to be very confident that a new field won't work! And things can be far from useful, but become useful via slowly becoming more powerful, etc. In particular, current work fails to explain much of the performance of models which makes me think that it's quite far from ambitious success and likely also usefulness. I think this even after seeing recent results like dictionary learning results (though results along these lines were a positive update for me overall). There isn't a story which-makes-much-sense-and-seems-that-plausible-to-me for how mech interp allows for strongly solving core problems like auditing for deception or being able to supervise superhuman models which carry out actions we don't understand (e.g. ELK). That said, all things considered, mech interp seems like a reasonable bet to put some resources in. I'm excited about various mech interp projects which either: Aim to more directly measure and iterate on key metrics of usefulness for mech interp Try to use mech interp to do something useful and compare to other methods (I'm fine with substantial mech interp industrial policy, but we do actually care about the final comparison. By industrial policy, I mean subsidizing current work even if mech interp isn't competitive yet because it seems promising.) I'm excited about two main outcomes from this dialogue: Figuring out whether or not we agree on the core claims I wrote above. 
(Either get consensus or find crux ideally) Figuring out which projects we'd be excited about which would substantially positively update us about mech interp. Maybe another question which is interesting: even if mech interp isn't that good for safety, maybe it's pretty close to stuff which is great and is good practice. Another outcome that I'm interested in is personally figuring out how to better articulate and communicate various takes around mech interp. By mech interp I mean "A subfield of interpretability that uses bottom-up or reverse engineering approaches, generally by corresponding low-level components such as circuits or neurons to components of human-understandable algorithms and then working upward to build an overall understanding." I feel pretty on board with this definition. Our arguments here do in fact have immediate implications for your research, and the research of your scholars, implying that you should prioritize projects of the following forms: Doing immediately useful stuff with mech interp (and probably non-mech interp), to get us closer to model-internals-based techniques adding value. This would improve the health of the field, because it's much better for a field to be able to evaluate work in simple ways. Work which tries to establish the core ambitious hopes for mech interp, rather than work which scales up mediocre-quality results to be more complicated or on bigger models. What I want from this dialogue: Mostly an excuse to form more coherent takes on why mech interp matters, limitations, priorities, etc I'd be excited if this results in us identifying concrete cruxes I'd be even more excited if we identify concrete projects that could help illuminate these cruxes (especially things I could give to my new army of MATS scholars!) 
I'd like to explicitly note I'm excited to find great concrete projects! Stream of ...

Bitcoin Dad Pod
Episode 113: Wet Hot Financial Surveillance

Bitcoin Dad Pod

Play Episode Listen Later Dec 1, 2023 45:18


Pre-Show Bad entropy will get your bitcoins stolen (https://nitter.net/mononautical/status/1728946778798793126?ref=nobsbitcoin.com) News This cycle's bitcoin environmental FUD is probably going to be water usage (https://www.theverge.com/2023/11/29/23979858/bitcoin-mine-cryptocurrency-water-use-analysis-energy) Shiny new FUD ammo takes media by storm! (https://stacker.news/items/333497) Rebuttal of BBCNews article, and the Underlying Study, on Bitcoin Water Usage – Batcoinz (https://batcoinz.com/2484-2/) Unchained surveyed US investors and sentiment is very bullish on bitcoin (https://unchained.com/blog/bullish-on-bitcoin-us-investor-outlook-for-btc-in-2024/) Economics What is Global R* (https://bankunderground.co.uk/2023/11/30/global-r/)? And why is it still falling? A Mechanistic model-based view of society, humans, and economics is wrong It's not good enough to centrally manage human existence Privacy The US Treasury wants to expand warrantless mass surveillance and sanction free speech via software (https://www.coincenter.org/in-an-effort-to-close-perceived-loopholes-treasury-recommends-massive-expansion-of-warrantless-surveillance-and-power-to-sanction-open-source-software/) in order to strengthen its ability to sanction Bitcoin Education Bitcoin Optech covers liquidity advertisements (https://bitcoinops.org/en/newsletters/2023/11/29/), a feature of dual-funded channels on lightning offered by Core Lightning from Blockstream: both parties contribute a utxo to a lightning channel, giving liquidity on both sides of the channel at opening Feedback Remember to get in touch bitcoindadpod@protonmail.com or @bitcoindadpod (https://mobile.twitter.com/bitcoindadpod) on twitter Consider joining the matrix channel (https://matrix.to/#/#bitcoin:jupiterbroadcasting.com) using a matrix client like element (https://element.io/get-started), details here (https://www.jupiterbroadcasting.com/community/matrix/) Bitcoin Dad Pod - Pay with your Fav App 
(https://fountain.fm/show/iSDKQzR5V5DCLn2AuUnI) I sent a boost via Fountain.fm's website Scanned the QR code with my Strike app No Alby or new Podcasting app required! Thank you Boosters If you get some value from this show, please consider sending a boost. Hearing from you means a lot to us! Send a Boost via the Podcast Index web page. No Podcast app upgrade required. Install Alby (https://getalby.com/) Find the Bitcoin Dad Pod on the Podcast Index (https://podcastindex.org/podcast/5049889) Boost right from the page! Send a recurring or one-off lightning boost to the show with no message at bdadpod@getalby.com or directly to Chris at chrislas@getalby.com Value for Value Podcasting 2.0 to support an independent podcasting ecosystem (https://podcastindex.org/) Recommended Podcasting2.0 apps: Fountain (https://www.fountain.fm/) podcast app (Android) Podverse (https://podverse.fm/) (Cross platform and self hostable) + Alby (https://getalby.com/) for boosts Castamatic (https://apps.apple.com/us/app/castamatic-podcast-player/id966632553) (Apple) Sponsors and Acknowledgements Music by Lesfm from Pixabay Self Hosted Show (https://selfhosted.show/) courtesy of Jupiter Broadcasting (https://www.jupiterbroadcasting.com/)

The Human Risk Podcast
Dr Jake Mazulewicz on Human Reliability

The Human Risk Podcast

Play Episode Listen Later Nov 27, 2023 71:29


What do you think of when you hear the word 'error'? It's highly likely you'll think of it negatively as a defect. The obvious way to manage defects, particularly in safety-critical industries, is to have detailed procedures, strict compliance, and zero tolerance for errors. But we know that this doesn't always work. After all, if it did, we'd have far fewer errors. My guest on this episode takes a different approach. He specializes in helping organizations, particularly safety-critical ones where the cost of getting something wrong can be death or injury, to reduce errors, improve safety and build trust. He calls this human reliability. His name is Dr. Jake Mazulewicz, and he's been a firefighter, an emergency medical technician and a military paratrooper. Nowadays, he brings all of those experiences to bear in helping organizations design processes and cultures that allow humans to manage the complexities that don't always allow themselves to be neatly codified into standard procedures. As you'll hear, he's got some fascinating ideas about designing safety models that flex to meet the situation's needs. Key Moments In The Show (mins:secs) 02:14 — Dr. Jake's background 05:25 — Mechanistic vs. Adaptive systems 06:28 — The big problem: too many leaders treat ALL systems as Mechanistic systems 09:10 — What to say to a commercial pilot when you're walking off their aircraft after the flight lands 10:40 — Four work guidance modes 11:00 — 1) Procedures 11:53 — 2) Guidelines like, "To find out what an organization values, follow the money." 13:00 — 3) Principles like "A jury doesn't decide what to believe. A jury decides who to believe." 21:20 — 4) Tacit Knowledge — You can solve complex problems, but can't explain how 26:40 — "All models are wrong, but some of them are useful." 31:10 — How one team of electricians dramatically improved safety by using FEWER procedures 35:57 — Letter of the law vs. spirit of the law 38:20 — Have you heard of Philip K. Howard?... "Let's pretend I haven't." 42:10 — We write rules when we don't have enough trust 44:55 — Build trust by overcommunicating your intentions 45:25 — "Commander's Intent" in military mission plans 47:55 — Listen for "Weak Signals" like hearing, "I'll do whatever it takes…" 50:40 — Stay resilient by catching a system before it goes "exponential" 54:00 — Chris Argyris' 17-word, 4-step recipe for creating a toxic work culture 57:10 — A new Early Warning System 58:20 — Ask an expert, "What's a 'Weak Signal' in your field, and what does it mean?" 1:04:55 — Why a non-punitive approach is so helpful and so uncommon 1:10:10 — How to get in touch with Dr. Jake — reliableorg.com Further Information: To find out more about Practical Human Performance For Leaders visit www.reliableorg.com LinkedIn: https://www.linkedin.com/in/jake-mazulewicz/

A Cut Above: Cardiothoracic insights from EACTS
Trial highlights from EACTS Annual Meeting 2023

A Cut Above: Cardiothoracic insights from EACTS

Play Episode Listen Later Nov 27, 2023 33:36


Join our hosts Can Gollmann-Tepeköylü and Miia L Lehtinen for their late-breaking trials highlights presented at the 37th EACTS Annual Meeting in Vienna, including:  (2:40 - 8:53) Subannular Repair for Secondary Mitral Regurgitation with restricted systolic leaflet Motion (type IIIb): 2-year results from REFORM-MR registry by Dr Jonas Pausch   (8:53 - 13:47) Mechanistic insights from the CAST-HF trial: Transcriptomic analysis of human myocardial biopsies reveals biglycan as crucial mediator of angiogenesis by Dr Johannes Holfeld   (13:47 - 20:55) First-in-human experience with a novel custom-made double branch stent graft for aortic arch repair – The NEXUS DUO by Prof Mario Louis Lachat & Dr P Buyl   (20:56 - 25:28) Real World Experience of 510 On-X Aortic Valve Replacement Patients Treated with Low Dose Warfarin by Prof Aungye OO   (25:28 - 32:01) CEASE-AF: Effectiveness and safety of hybrid epicardial and endocardial ablation in patients with persistent and longstanding persistent atrial fibrillation by Dr Piotr Suwalski.

Metaphysical Romp 2 Podcast
2023-36: Science Delusions Debunked, Part 4; Absolute Word-Transitions

Metaphysical Romp 2 Podcast

Play Episode Listen Later Nov 24, 2023 20:22


MetaSpiritual Podcast - Episode 4 of a series based on the 10 science delusions from a book by Rupert Sheldrake called The Science Delusion. In this episode, we visit: Minds are inside our brain, Our memories are wiped out at death, Psi abilities like telepathy are illusory, and Mechanistic medicine is the only kind that works. The post 2023-36: Science Delusions Debunked, Part 4; Absolute Word-Transitions first appeared on Metaphysical Romp 2 Podcast.

The Nonlinear Library
LW - Neel Nanda on the Mechanistic Interpretability Researcher Mindset by Michaël Trazzi

The Nonlinear Library

Play Episode Listen Later Sep 22, 2023 4:12


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Neel Nanda on the Mechanistic Interpretability Researcher Mindset, published by Michaël Trazzi on September 22, 2023 on LessWrong. Some excerpts from my interview with Neel Nanda about how to productively carry out research in mechanistic interpretability. Posting this here since I believe his advice is relevant for building accurate world models in general. An Informal Definition Of Mechanistic Interpretability It's kind of this weird flavor of AI interpretability that says, "Bold hypothesis. Despite the entire edifice of established wisdom and machine learning, saying that these models are bullshit, inscrutable black boxes, I'm going to assume there is some actual structure here. But the structure is not there because the model wants to be interpretable or because it wants to be nice to me. The structure is there because the model learns an algorithm, and the algorithms that are most natural to express in the model's structure and its particular architecture and stack of linear algebra are algorithms that make sense to humans. (context) Three Modes Of Mechanistic Interpretability Research: Confirming, Red Teaming And Gaining Surface Area I kind of feel a lot of my research style is dominated by this deep seated conviction that models are comprehensible and that everything is fundamentally kind of obvious and that I should be able to just go inside the model and there should be this internal structure. And so one mode of research is I just have all of these hypotheses and guesses about what's going on. I generate experiment ideas for things that should be true if my hypothesis is true. And I just repeatedly try to confirm it. 
Another mode of research is trying to red team and break things, where I have this hypothesis, I do this experiment, I'm like, "oh my God, this is going so well", and then get kind of stressed because I'm concerned that I'm having wishful thinking and I try to break it and falsify it and come up with experiments that would show that actually life is complicated. A third mode of research is what I call "trying to gain surface area" where I just have a system that I'm pretty confused about. I just don't really know where to get started. Often, I'll just go and do things that I think will get me more information. Just go and plot stuff or follow random things I'm curious about in a fairly undirected fuzzy way. This mode of research has actually been the most productive for me. [...] You could paraphrase them as, "Isn't it really obvious what's going on?", "Oh man, am I so sure about this?" and "Fuck around and find out". (context) Strong Beliefs Weakly Held: Having Hypotheses But Being Willing To Be Surprised You can kind of think of it as "strong beliefs weakly held". I think you should be good enough that you can start to form hypotheses, being at the point where you can sit down, set a five minute timer and brainstorm what's going on and come up with four different hypotheses is just a much, much stronger research position than when you sit down and try to brainstorm and you come up with nothing. Yeah, maybe having two hypotheses is the best one. You want to have multiple hypotheses in mind. You also want to be aware that probably both of them are wrong, but you want to have enough engagement with the problem that you can generate experiment ideas. Maybe one way to phrase it is if you don't have any idea what's going on, it's hard to notice what's surprising. And often noticing what's surprising is one of the most productive things you can do when doing research. 
(context) On The Benefits Of The Experimental Approach I think there is a strong trend among people, especially the kind of people who get drawn to alignment from very theory based arguments to go and just pure theory craft and play around with toy models and form beautiful, elegant hy...

The Nonlinear Library
AF - Barriers to Mechanistic Interpretability for AGI Safety by Connor Leahy

The Nonlinear Library

Play Episode Listen Later Aug 29, 2023 1:46


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Barriers to Mechanistic Interpretability for AGI Safety, published by Connor Leahy on August 29, 2023 on The AI Alignment Forum. I gave a talk at MIT in March earlier this year on barriers to mechanistic interpretability being helpful to AGI/ASI safety, and why by default it will likely be net dangerous. Several people seem to be coming to similar conclusions recently (e.g., this recent post). I discuss two major points (by no means exhaustive), one technical and one political, that present barriers to MI addressing AGI risk: AGI cognition is interactive. AGI systems interact with their environment, learn online and will externalize massive parts of their cognition into the environment. If you want to reason about such a system, you also need a model of the environment. Worse still, AGI cognition is reflective, and you will also need a model of cognition/learning. (Most) MI will lead to capabilities, not oversight. Institutions are not set up and do not have the incentives to resist using capabilities gains and submit to monitoring and control. This being said, there are more nuances to this opinion, and a lot of it is downstream of lack of coordination and the downsides of publishing in an adversarial environment like we are in right now. I still endorse the work done by e.g. Chris Olah's team as brilliant, but extremely early, scientific work that has a lot of steep epistemological hurdles to overcome, but I unfortunately also believe that on net work such as Olah's is at the moment more useful as a safety-washing tool for AGI labs like Anthropic than actually making a dent on existential risk concerns. Here are the slides from my talk, and you can find the video here. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
AF - An interactive introduction to grokking and mechanistic interpretability by Adam Pearce

The Nonlinear Library

Play Episode Listen Later Aug 7, 2023 2:15


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An interactive introduction to grokking and mechanistic interpretability, published by Adam Pearce on August 7, 2023 on The AI Alignment Forum. Our write up largely agrees with @Quintin Pope's summary, with the addition of training trajectory visualizations and an explanation of the MLP construction that solves modular addition. A meta note that didn't make it into the article - with so many people looking into this problem over the last 18 months, I'm surprised this construction took so long to find. The modular addition task with a 1-layer MLP is about as simple as you can get! Scaling mechanistic interpretability up to more complex tasks/models seems worth continuing to try, but I'm less sure extracting crisp explanations will be possible. Even if we "solve" superposition, figuring out the construction here - where there's no superposition in the generalizing model - wasn't trivial. gif/twitter summary If we train an MLP to solve modular addition, the generalizing phase has suggestive periodic patterns. To figure out why the model generalizes, we first look at a task where we know the generalizing solution - sparse parity. You can see the model generalizing as weight decay prunes spurious connections. One point from the Omnigrok paper I hadn't internalized before training lots of models: grokking only happens when hyper-parameters are just right. We can make other weird things happen too, like AdamW oscillating between low train loss and low weights. To understand how an MLP solves modular addition, we train a much smaller model with a circular input embedding baked in. Following @Neel Nanda and applying a discrete Fourier transform, we see larger models trained from scratch use the same star trick! 
Finally, we show what the stars are doing and prove that they work: Our ReLU activation has a small error, but it's close enough to the exact solution - an x² activation suggested in Grokking modular arithmetic - for the model to patch everything up w/ constructive interference. And there are still open questions: why are the frequencies with >5 neurons lopsided? Why does factoring Winput not do the same thing as factoring Woutput? Also see The Hydra Effect: Emergent Self-repair in Language Model Computations Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
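The discrete Fourier transform diagnostic mentioned in these notes can be illustrated in a few lines. This is a hedged sketch, not the write-up's actual code: the modulus matches the one commonly used in the grokking write-ups, but the two "key frequencies" and the hand-built circular embedding are made-up stand-ins for whatever a trained network happens to learn.

```python
import numpy as np

p = 113           # modulus, as in the grokking write-ups
freqs = [3, 17]   # hypothetical "key frequencies" standing in for learned ones

# Hand-built circular embedding: residue a -> cos/sin waves at a few frequencies.
a = np.arange(p)
W_E = np.stack(
    [f(2 * np.pi * k * a / p) for k in freqs for f in (np.cos, np.sin)],
    axis=1,
)  # shape (p, 4): one column per frequency/phase pair

# The diagnostic: Fourier-transform each column over the residue axis and see
# where the spectral power concentrates.
power = np.abs(np.fft.rfft(W_E, axis=0)) ** 2
dominant = {int(np.argmax(power[:, j])) for j in range(W_E.shape[1])}
print(sorted(dominant))  # -> [3, 17]: all the power sits at the construction's frequencies
```

Applied to the embedding of a model trained from scratch, the same transform producing a few sharp peaks is what the write-up means by the model using the "star trick".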

E48: Mechanizing Mechanistic Interpretability with Arthur Conmy

Play Episode Listen Later Jul 27, 2023 136:40


Arthur Conmy sits down with Nathan Labenz for an accessible deep dive into the state of interpretability research online today. They discuss how pioneering researchers have painstakingly worked to isolate the sub-circuits within transformers that are responsible for different aspects of AI capabilities. Arthur also introduces us to a new ACDC approach that he and his co-authors have taken to automating some of the most time-consuming parts of this work. If you're looking for an ERP platform, check out our sponsor, NetSuite: http://netsuite.com/cognitive RECOMMENDED PODCAST: The HR industry is at a crossroads. What will it take to construct the next generation of incredible businesses – and where can people leaders have the most business impact? Hosts Nolan Church and Kelli Dragovich have been through it all, the highs and the lows – IPOs, layoffs, executive turnover, board meetings, culture changes, and more. With a lineup of industry vets and experts, Nolan and Kelli break down the nitty-gritty details, trade offs, and dynamics of constructing high performing companies. Through unfiltered conversations that can only happen between seasoned practitioners, Kelli and Nolan dive deep into the kind of leadership-level strategy that often happens behind closed doors. Check out the first episode with the architect of Netflix's culture deck Patty McCord. https://link.chtbl.com/hrheretics The Cognitive Revolution is a part of the Turpentine podcast network. Learn more: Turpentine.co TIMESTAMPS: (00:00) Episode Preview (04:40) What attracted Arthur to mechanistic interpretability? 
(07:49) LLM information processing: General Understanding vs Stochastic Parrot Paradigm (14:00) ACDC paper: https://arxiv.org/abs/2304.14997  (14:45) Sponsors: NetSuite | Omneky (24:30) Putting together data sets  (32:39) How to intervene in LLMs network activity (36:00) Defining metrics to evaluate the production of correct completions (44:20) The future of the mechanistic interpretability research  (50:00) Extracting upstream activations in the ACDC project and evaluating impact on downstream components. (56:00) Anthropic research findings  (01:08:00) 3-Step process of the ACDC approach (01:22:00) Setting a threshold and validation (01:27:00) Goal of the approach (01:32:00) Compute requirements Correction: at (01:33:00), Arthur meant to say "quadratic in nodes" (01:35:30) Scaling laws for mechanistic interpretability (01:40:00) Accessibility of this research for casual enthusiasts (01:46:00) Emergence discourse (01:56:00) Path to AI safety LINKS: https://arthurconmy.github.io/ https://arxiv.org/abs/2304.14997  X: @labenz (Nathan) @arthurconmy (Arthur) @cogrev_podcast SPONSORS: NetSuite | Omneky -NetSuite provides financial software for all your business needs. More than thirty-six thousand companies have already upgraded to NetSuite, gaining visibility and control over their financials, inventory, HR, eCommerce, and more. If you're looking for an ERP platform, check out NetSuite (http://netsuite.com/cognitive) and defer payments of a FULL NetSuite implementation for six months. -Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that *actually work* customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off.
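As a rough illustration of the kind of automation the episode describes (not Conmy et al.'s actual algorithm or code), here is a toy greedy ablation loop: each edge of a tiny computation "graph" is knocked out in turn, and an edge is pruned whenever the output metric barely moves. The edge names, weights, and threshold are invented for the sketch; the real ACDC method operates on transformer activations and compares output distributions rather than a single scalar.

```python
# Toy ACDC-style circuit discovery: greedily ablate each edge and keep it only
# if removing it shifts the output metric beyond a threshold.
weights = {"e1": 2.0, "e2": 0.001, "e3": -1.5, "e4": 0.0005}  # invented values
x = 1.0  # a single probe input

def run(active_edges):
    # "Ablating" an edge here just means dropping its contribution.
    return sum(w * x for e, w in weights.items() if e in active_edges)

full_output = run(set(weights))  # reference output of the unablated model
threshold = 0.01                 # how much metric change counts as "mattering"

circuit = set(weights)
for e in sorted(weights):        # ACDC proper iterates in topological order
    candidate = circuit - {e}
    if abs(run(candidate) - full_output) < threshold:
        circuit = candidate      # edge didn't matter: prune it permanently
print(sorted(circuit))  # -> ['e1', 'e3']: only the high-impact edges survive
```

The surviving subgraph is the "circuit" credited with the behavior; the time-consuming part Arthur automates is exactly this edge-by-edge patching, done at scale.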

From Our Neurons to Yours
Exercise and the brain

From Our Neurons to Yours

Play Episode Listen Later Jun 22, 2023 22:29 Transcription Available


We all know exercise has all sorts of benefits beyond just making us stronger and fitter. It lowers inflammation. It buffers stress and anxiety. It clarifies our thinking. In fact, regular exercise is one of the few things we know with reasonable confidence can help extend our healthy lifespan. But for all the evidence of the benefits of exercise, it's a bit surprising that we don't know more about how exercise does all these great things for our bodies and our brains. Today's guest, Jonathan Long, recently discovered a new molecule produced when we exercise, a compound called Lac-Phe. Lac-Phe appears to be linked to a number of health benefits, from regulating appetite to boosting learning and memory. Long is a chemist by training and an institute scholar of Sarafan ChEM-H, the Institute for Chemistry, Engineering and Medicine for Human Health, our sister institute here at Stanford. So I started our conversation by asking him how his background as a chemist informs how he thinks about studying exercise and human health. NOTE: Thanks to everyone who's tuned in to our first season! We're going to take a break for the summer to get ready for next season, but we'll have more tales from the frontiers of brain science for you in the fall. Learn More: Organism-wide, cell-type-specific secretome mapping of exercise training in mice (Cell Metabolism, 2023) Understanding how different cell types respond to exercise could be a key step toward exercise as medicine (Wu Tsai Human Performance Alliance, 2023) An exercise-inducible metabolite that suppresses feeding and obesity (Nature, 2022) 'Anti-hunger' molecule forms after exercise, scientists discover (Stanford Medicine) Why Does a Hard Workout Make You Less Hungry? (New York Times) An exercise molecule? (American Society for Biochemistry and Molecular Biology blog) Mechanistic dissection and therapeutic capture of an exercise-inducible metabolite signaling pathway for brain resilience (Innovation Award from the Knight Initiative for Brain Resilience at the Wu Tsai Neurosciences Institute) Episode Credits: This episode was produced by Michael Osborne, with production assistance by Morgan Honaker, and hosted by Nicholas Weiler. Art by Aimee Garza. Thanks for listening! Learn more about the Wu Tsai Neurosciences Institute at Stanford and follow us on Twitter, Facebook, and LinkedIn.

Machine Learning Street Talk
Neel Nanda - Mechanistic Interpretability

Machine Learning Street Talk

Play Episode Listen Later Jun 18, 2023 250:00


In this wide-ranging conversation, Tim Scarfe interviews Neel Nanda, a researcher at DeepMind working on mechanistic interpretability, which aims to understand the algorithms and representations learned by machine learning models. Neel discusses how models can represent their thoughts using motifs, circuits, and linear directional features which are often communicated via a "residual stream", an information highway models use to pass information between layers. Neel argues that "superposition", the ability for models to represent more features than they have neurons, is one of the biggest open problems in interpretability. This is because superposition thwarts our ability to understand models by decomposing them into individual units of analysis. Despite this, Neel remains optimistic that ambitious interpretability is possible, citing examples like his work reverse engineering how models do modular addition. However, Neel notes we must start small, build rigorous foundations, and not assume our theoretical frameworks perfectly match reality. The conversation turns to whether models can have goals or agency, with Neel arguing they likely can based on heuristics like models executing long term plans towards some objective. However, we currently lack techniques to build models with specific goals, meaning any goals would likely be learned or emergent. Neel highlights how induction heads, circuits models use to track long range dependencies, seem crucial for phenomena like in-context learning to emerge. On the existential risks from AI, Neel believes we should avoid overly confident claims that models will or will not be dangerous, as we do not understand them enough to make confident theoretical assertions. However, models could pose risks through being misused, having undesirable emergent properties, or being imperfectly aligned. 
Neel argues we must pursue rigorous empirical work to better understand and ensure model safety, avoid "philosophizing" about definitions of intelligence, and focus on ensuring researchers have standards for what it means to decide a system is "safe" before deploying it. Overall, a thoughtful conversation on one of the most important issues of our time. Support us! https://www.patreon.com/mlst MLST Discord: https://discord.gg/aNPkGUQtc5 Twitter: https://twitter.com/MLStreetTalk Neel Nanda: https://www.neelnanda.io/ TOC [00:00:00] Introduction and Neel Nanda's Interests (walk and talk) [00:03:15] Mechanistic Interpretability: Reverse Engineering Neural Networks [00:13:23] Discord questions [00:21:16] Main interview kick-off in studio [00:49:26] Grokking and Sudden Generalization [00:53:18] The Debate on Systematicity and Compositionality [01:19:16] How do ML models represent their thoughts [01:25:51] Do Large Language Models Learn World Models? [01:53:36] Superposition and Interference in Language Models [02:43:15] Transformers discussion [02:49:49] Emergence and In-Context Learning [03:20:02] Superintelligence/XRisk discussion Transcript: https://docs.google.com/document/d/1FK1OepdJMrqpFK-_1Q3LQN6QLyLBvBwWW_5z8WrS1RI/edit?usp=sharing Refs: https://docs.google.com/document/d/115dAroX0PzSduKr5F1V4CWggYcqIoSXYBhcxYktCnqY/edit?usp=sharing
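The "superposition" idea described above, a model representing more features than it has neurons, can be demonstrated with a toy linear model. This is an illustrative sketch, not anything from the episode: the dimensions, feature counts, and random-direction encoding are arbitrary choices made for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 256, 1024   # 256 "neurons" hold 1024 features: 4x more features than dims

# Random unit vectors in high dimensions are nearly orthogonal, so each feature
# can claim its own direction with only small interference between them.
F = rng.normal(size=(n, d))
F /= np.linalg.norm(F, axis=1, keepdims=True)

# Activate a sparse subset of features and superpose their directions.
active = set(map(int, rng.choice(n, size=4, replace=False)))
h = F[sorted(active)].sum(axis=0)   # the d-dimensional "hidden state"

# A dot-product readout recovers the active set despite the interference terms.
scores = F @ h
recovered = set(map(int, np.argsort(scores)[-4:]))
print(recovered == active)  # -> True
```

This is why superposition thwarts neuron-by-neuron analysis: no single coordinate of `h` corresponds to any one feature, yet the information is all there in linear directions.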

Brain Inspired
BI 168 Frauke Sandig and Eric Black w Alex Gomez-Marin: AWARE: Glimpses of Consciousness

Brain Inspired

Play Episode Listen Later Jun 2, 2023 114:42


Check out my free video series about what's missing in AI and Neuroscience Support the show to get full episodes and join the Discord community. This is one in a periodic series of episodes with Alex Gomez-Marin, exploring how the arts and humanities can impact (neuro)science. Artistic creations, like cinema, have the ability to momentarily lower our ever-critical scientific mindset and allow us to imagine alternate possibilities and experience emotions outside our normal scientific routines. Might this feature of art potentially change our scientific attitudes and perspectives? Frauke Sandig and Eric Black recently made the documentary film AWARE: Glimpses of Consciousness, which profiles six researchers studying consciousness from different perspectives. The film is filled with rich visual imagery and conveys a sense of wonder and awe in trying to understand subjective experience, while diving deep into the reflections of the scientists and thinkers approaching the topic from their various perspectives. This isn't a "normal" Brain Inspired episode, but I hope you enjoy the discussion! AWARE: Glimpses of Consciousness Umbrella Films 0:00 - Intro 19:42 - Mechanistic reductionism 45:33 - Changing views during lifetime 53:49 - Did making the film alter your views? 57:49 - ChatGPT 1:04:20 - Materialist assumption 1:11:00 - Science of consciousness 1:20:49 - Transhumanism 1:32:01 - Integrity 1:36:19 - Aesthetics 1:39:50 - Response to the film

Get Healthy Alabama
23-20 Want To REALLY Get Healthy? Adopt A Vitalistic Approach To Health

May 17, 2023 · 21:19


Episode 23:20 — Want To REALLY Get Healthy? Adopt A Vitalistic Approach To Health

Everyone wants to be healthier. The problem is most people don't know HOW. That is, most people don't know what they need to do in order to live a more vibrant and healthier life.

Part of the problem is that we live in a society where the health care system follows a "mechanistic" approach… an approach that looks at the human body as a "machine" made up of "parts." If something goes wrong with one of the "parts," you are told to visit a medical specialist who will treat it. While this "mechanistic" approach certainly has some value, it falls far short when it comes to helping people overcome chronic health conditions such as headaches, neck and back pain, heart disease, cancer, diabetes, skin diseases and hormonal imbalances. Those conditions - conditions that account for the overwhelming majority of doctor visits - are better served by a "Vitalistic" approach to health.

What's a Vitalistic approach to health? It's an approach that looks at the body as MORE than just the sum of its parts. That is, instead of viewing the various organs, glands and systems of the body as isolated parts, a Vitalistic approach views them all as being part of a team. A symphony. Unlike a Mechanistic approach, a Vitalistic approach to health holds that what happens to one organ, gland or system will also have an effect on other organs, glands and systems throughout the body. In addition, a Vitalistic approach to health holds that there is a life-force in the human body… a form of energy that WANTS to help us heal and be healthy.

On this episode we talk about this Vitalistic approach to health and how adopting it for yourself can help you get to the ROOT cause of your health issues. Please give this episode a good listen and then share it with a friend. Thanks!

———————-

Want to learn more? Continue the conversation regarding this episode, and all future episodes, by signing up for our daily emails.
Simply visit: GetHealthyAlabama.com — once there, download the "Symptom Survey" and you will automatically be added to our email list.

———————-

Also, if you haven't already, we'd appreciate it if you'd subscribe to the podcast, leave a comment and give us a rating. (Thanks!!!)

On Facebook? Connect with us at Facebook.com/GetHealthyAlabama

* This podcast is for informational and educational purposes only. It is not intended to diagnose or treat any disease. Please consult with your health care provider before making any health-related changes.