The wonderful human being that is Chirag Patel is this year's DOGx 2025 Keynote Speaker, so it was only a matter of time before we got together to chat! What resulted was one of our favourite ever episodes, and we couldn't be more excited for this year's conference. Click the link below NOW to register your interest, find out when tickets go on sale at an exclusive early bird rate, and make bloomin' sure your botty is on one of those seats! Register your interest for DOGx 2025 now: https://www.pact-dogs.com/dogx2025#register-your-interest This was a jam-packed show where we left no stone unturned. We explore Chirag's early experiences with the dog named Kane that sparked it all, how some wonderful early mentorship opportunities have shaped his views on giving back to the training community, and delve into how we as trainers can foster creativity and the importance of observation. We also discuss mindfulness, how engaging fully with clients can lead to wonderful outcomes, and where we should be putting our efforts in terms of inspiring the next wave of awesome animal trainers. We celebrate the realities of where positive training has got us and talk through the potential pitfalls of any movement that doesn't follow the evidence. Chirag also muses on how societal changes have evolved our training styles and talks about being a proud Radical Behaviourist. No chat with Chirag would be complete without a good dose of Skinner, and Chirag talks passionately about Skinner's philosophies, including areas he wrote about that many haven't explored. On top of all that, we also chew the fat over how social media can distort perceptions of animal training practices, how the very nature of scientific inquiry requires a level of uncertainty to flourish, AND the importance of balancing technical knowledge with practical application when it comes to dog training. As an added "Bookshelver bonus" we've also captured Nat's LIVE reaction to seeing Benson Boone's performance at the Grammys (spoiler… she enjoyed it). Honestly, this was one of our favourite shows EVER and I'm sure you'll enjoy it from start to end! Now make like a Benson, flip off a piano and get this episode in your ear-holes... WOOF!
Challenging assumptions about learning, performance, and the rise of AI. Nick Shackleton-Jones returns to The Learning Hack to challenge assumptions about learning theory, discuss his Affective Context Model, and reflect on the future of workplace learning. From TikTok as a learning platform to the risks of deceptive AI, this thought-provoking conversation will inspire and provoke in equal measure. Prepare for fresh insights and bold perspectives from one of learning's great minds. 00:00:00 - Start 00:01:09 - Intro 00:03:40 - TikTok: the ultimate learning platform? 00:09:28 - What has Nick been doing in the last 5 years? 00:22:43 - Is learning ROI based on magical thinking 00:27:37 - How has his thinking changed since he first wrote ‘How People Learn'? 00:36:26 - Have digital media become too poisonous for learning? 00:42:37 - Has the move to performance support really happened in L&D? 00:47:05 - AI: Is Nick a P-Doomer? 01:09:41 - Who does he follow in learning? 01:13:03 - End Contact John Helmer LinkedIn: linkedin.com/in/johnhelmer X: @johnhelmer Bluesky: @johnhelmer.bsky.social Website: learninghackpodcast.com
You may have heard of singular learning theory, and its "local learning coefficient", or LLC - but have you heard of the refined LLC? In this episode, I chat with Jesse Hoogland about his work on SLT, and using the refined LLC to find a new circuit in language models. Patreon: https://www.patreon.com/axrpodcast Ko-fi: https://ko-fi.com/axrpodcast The transcript: https://axrp.net/episode/2024/11/27/38_2-jesse-hoogland-singular-learning-theory.html FAR.AI: https://far.ai/ FAR.AI on X (aka Twitter): https://x.com/farairesearch FAR.AI on YouTube: https://www.youtube.com/@FARAIResearch The Alignment Workshop: https://www.alignment-workshop.com/ Topics we discuss, and timestamps: 00:34 - About Jesse 01:49 - The Alignment Workshop 02:31 - About Timaeus 05:25 - SLT that isn't developmental interpretability 10:41 - The refined local learning coefficient 14:06 - Finding the multigram circuit Links: Differentiation and Specialization of Attention Heads via the Refined Local Learning Coefficient: https://arxiv.org/abs/2410.02984 Investigating the learning coefficient of modular addition: hackathon project: https://www.lesswrong.com/posts/4v3hMuKfsGatLXPgt/investigating-the-learning-coefficient-of-modular-addition Episode art by Hamish Doodles: hamishdoodles.com
Send Us Your Questions. Barn Bonus 4 - What is Positive Punishment? Welcome to our fourth bonus episode of "Dangerous at Both Ends, Tricky in the Middle"! In this special "Barn Bonus" series, we dive into key behavioural terms that are crucial for understanding equine behaviour and training. In today's episode, we explore the concept of positive punishment. Positive punishment involves adding an aversive stimulus to decrease the likelihood of an unwanted behaviour. We'll break down what it means, how it works, and why it's a significant concept in horse training. Join us for a concise and informative discussion that will enhance your understanding of horse behaviour and improve your training techniques. Perfect for both new and experienced horse enthusiasts looking to deepen their knowledge. Tune in and let's get to the core of positive punishment in this quick yet insightful episode! If you have any questions or comments, feel free to reach out to Barbara and Jen at the links below. We'd love to hear from you! Meet Your Hosts: Barbara Hardman (Bright Horse Equiation) - www.brighthorse.ie
Send Us Your Questions. Barn Bonus 3 - What is Negative Punishment? Welcome to our third bonus episode of "Dangerous at Both Ends, Tricky in the Middle"! In this special "Barn Bonus" series, we dive into key behavioural terms that are crucial for understanding equine behaviour and training. In today's episode, we explore the concept of negative punishment. Negative punishment involves removing something desirable to decrease the likelihood of unwanted behaviours. We'll break down what it means, how it works, and why it's an important concept in effective horse training. Join us for a concise and informative discussion that will enhance your understanding of horse behaviour and improve your training techniques. Perfect for both new and experienced horse enthusiasts looking to deepen their knowledge. Tune in and let's get to the core of negative punishment in this quick yet insightful episode! If you have any questions or comments, feel free to reach out to Barbara and Jen at the links below. We'd love to hear from you! Meet Your Hosts: Barbara Hardman (Bright Horse Equiation) - www.brighthorse.ie
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Singular learning theory: exercises, published by Zach Furman on August 30, 2024 on LessWrong. Thanks to Jesse Hoogland and George Wang for feedback on these exercises. In learning singular learning theory (SLT), I found it was often much easier to understand by working through examples, rather than trying to work through the (fairly technical) theorems in their full generality. These exercises are an attempt to collect the sorts of examples that I worked through to understand SLT. Before doing these exercises, you should have read the Distilling Singular Learning Theory (DSLT) sequence, watched the SLT summit YouTube videos, or studied something equivalent. DSLT is a good reference to keep open while solving these problems, perhaps alongside Watanabe's textbook, the Gray Book. Note that some of these exercises cover the basics, which are well-covered in the above distillations, but some deliver material which will likely be new to you (because it's buried deep in a textbook, because it's only found in adjacent literature, etc). Exercises are presented mostly in conceptual order: later exercises freely use concepts developed in earlier exercises. Starred (*) exercises are what I consider the most essential exercises, and the ones I recommend you complete first. 1. *The normal distribution, like most classical statistical models, is a regular (i.e. non-singular[1]) statistical model. A univariate normal model with unit variance and mean μ ∈ ℝ is given by the probability density p(x|μ) = (1/√(2π)) exp(−(x−μ)²/2). Assume the true distribution q(x) of the data is realizable by the model: that is, q(x) = p(x|μ0) for some true parameter μ0. a) Calculate the Fisher information matrix of this model (note that since we have only a single parameter, the FIM will be a 1x1 matrix). Use this to show the model is regular. b) Write an explicit expression for the KL divergence K(μ) between q(x) and p(x|μ), as a function of the parameter μ. This quantity is sometimes also called the population loss. [See Example 1.1, Gray Book, for the case of a 2D normal distribution] c) Using K(μ) from b), give an explicit formula for the volume of "almost optimal" parameters, V(ϵ) = Vol({μ : K(μ) < ϵ}).
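As a hedged illustration of exercise 1 above (my own sketch, not part of the original post), the closed forms can be checked numerically: the unit-variance normal has Fisher information I(μ) = 1 (so the model is regular), population loss K(μ) = (μ − μ0)²/2, and volume of almost-optimal parameters V(ϵ) = 2√(2ϵ), i.e. the regular ϵ^(1/2) scaling. The value of μ0 below is invented for illustration; assumes NumPy and SciPy.

```python
import numpy as np
from scipy.integrate import quad

mu0 = 0.7  # assumed "true" parameter, chosen only for illustration

def log_p(x, mu):
    """Log density of the unit-variance normal N(mu, 1)."""
    return -0.5 * np.log(2 * np.pi) - 0.5 * (x - mu) ** 2

def kl(mu):
    """K(mu) = E_{x~p(.|mu0)}[log p(x|mu0) - log p(x|mu)], computed numerically."""
    integrand = lambda x: np.exp(log_p(x, mu0)) * (log_p(x, mu0) - log_p(x, mu))
    value, _ = quad(integrand, -12, 12)
    return value

# (a) Fisher information: I(mu) = E[(d/dmu log p(x|mu))^2] = E[(x - mu)^2] = 1,
#     a strictly positive 1x1 matrix, so the model is regular.

# (b) The numerical KL matches the closed form K(mu) = (mu - mu0)^2 / 2.
for mu in (0.7, 1.0, 2.0):
    print(f"mu={mu}: numeric K={kl(mu):.6f}, closed form={(mu - mu0) ** 2 / 2:.6f}")

# (c) Volume of almost-optimal parameters V(eps) = Vol({mu : K(mu) < eps}).
#     From (b), K(mu) < eps iff |mu - mu0| < sqrt(2*eps), so V(eps) = 2*sqrt(2*eps),
#     i.e. V scales as eps^(1/2) -- the regular (non-singular) scaling.
for eps in (1e-2, 1e-4):
    print(f"eps={eps}: V(eps) = {2 * np.sqrt(2 * eps):.4f}")
```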
Education News Headline Roundup [00:08:10]
The Free Application for Federal Student Aid is once again majorly delayed. On August 7th the U.S. Department of Education announced a rollout process for the 2025-2026 form that includes an October 1st date for limited testing, with the application set to open to all students on December 1, 2024, two months later than the typical release date for the application. A federal appeals court has allowed an Iowa law that bans books with sexual content from K-12 school libraries and restricts instruction on sexual orientation and gender identity before seventh grade to take effect. This overturns a previous injunction that had paused the law, signed by Republican Governor Kim Reynolds in 2023. An update to a previously discussed story: in the wake of former Nebraska Senator Ben Sasse announcing his resignation from the University of Florida presidency, the UF student newspaper, the Independent Florida Alligator, has reported that Sasse may have been forced out over escalating tensions with the university's board chairman, Morteza "Mori" Hosseini.
Social Learning Theory: Bandura, Bobo, and Beyond [00:15:16]
Social Learning Theory (SLT) seeks to explain how we learn behaviors by observing and imitating others. This episode explores SLT's unique position between behaviorism, which focuses on observable behaviors, and cognitive psychology, which emphasizes internal processes like memory and perception. We'll discuss how Albert Bandura revolutionized psychology by developing new theories on aggression and modeled behaviors, challenging the dominant behaviorist views of the time. We'll cover Bandura's famous Bobo Doll experiment and its groundbreaking findings on observational learning, and we'll also introduce you to other key figures in the development of SLT, like Julian Rotter, who developed the concept of locus of control, and Walter Mischel, known for the marshmallow test on delayed gratification. We'll also tease apart the core concepts of SLT (modeling, self-efficacy, and vicarious reinforcement) to show how they work together to shape behavior. Finally, we'll discuss the broader applications and criticisms of SLT in areas like education, media, and even advertising, where the power of observed behavior is leveraged in both positive and controversial ways.
Sources & Resources:
The rollout for the updated FAFSA application has been delayed again : NPR
After Botched Rollout, FAFSA Is Delayed for a Second Year - The New York Times
FAFSA Rollout Delayed Again: Here's What to Know | Paying for College | U.S. News
U.S. Department of Education Announces Schedule and New Process to Launch 2025-26 FAFSA Form
'There's nothing more important right now': Cardona commits to fixing FAFSA disaster - POLITICO
Federal judges allow Iowa book ban to take effect this school year | AP News
Obama addresses healthcare website glitches - BBC News
Federal appeals court rules Iowa's book ban law can take effect
Sasse's spending, exit leave lingering questions at UF
University of Florida Pres. Kent Fuchs addresses Sasse allegations, plans for future
Sasse stepped down. Donors and top officials say he was forced out. - The Independent Florida Alligator
Ben Sasse Appears to Have Turned the University of Florida Into a Gravy Train for His Pals
Former UF President Ben Sasse defends spending after Gov. DeSantis raises concerns
Social cognitive theory | psychology | Britannica
Social learning | Secondary Keywords: Imitation, Observational Learning & Reinforcement | Britannica
Observational learning | Psychology, Behavior & Cognitive Processes | Britannica
Social learning theory - Wikipedia
Albert Bandura | Biography, Theory, Experiment, & Facts | Britannica
Albert Bandura, Leading Psychologist of Aggression, Dies at 95 - The New York Times
Self-efficacy: Toward a unifying theory of behavioral change - A. Bandura - APA PsycNet
Social learning and clinical psychology : Rotter, Julian B : Free Download, Borrow, and Streaming : Internet Archive
Julian Rotter - Wikipedia
Theories of Emeritus Professor Julian Rotter Still Relevant to Field of Clinical Psychology - UConn Today
Decision Making Individual Differences Inventory - Internal-External Scale
In Memoriam: Walter Mischel, Psychologist Who Developed Pioneering Marshmallow Test | Department of Psychology
Walter Mischel | Stanford Marshmallow Experiment, Cognitive Delay of Gratification | Britannica
How many users visit Wikipedia daily? - Quora
The Bobo Doll Experiment - Psychestudy
Biological Mechanisms for Observational Learning - PMC
Albert Bandura's experiments on aggression modeling in children: A psychoanalytic critique - PMC
Remembrance For Walter Mischel, Psychologist Who Devised The Marshmallow Test
Send Us Your Questions. Barn Bonus 2 - What is Positive Reinforcement? Welcome to our second bonus episode of "Dangerous at Both Ends, Tricky in the Middle"! In this special "Barn Bonus" series, we dive into key behavioural terms that are crucial for understanding equine behaviour and training. In today's episode, we explore the concept of positive reinforcement. Positive reinforcement is a powerful tool in behavioural psychology and horse training that encourages desired behaviours by offering rewards. We'll explain what it is, how it works, and why it's an effective technique for training your horse. Join us for a concise and informative discussion that will enhance your understanding of horse behaviour and improve your training techniques. Perfect for both new and experienced horse enthusiasts looking to deepen their knowledge. Tune in and let's uncover the benefits of positive reinforcement in this quick yet insightful episode! If you have any questions or comments, feel free to reach out to Barbara and Jen at the links below. We'd love to hear from you! Meet Your Hosts: Barbara Hardman (Bright Horse Equiation) - www.brighthorse.ie
Barn Bonus 1 - What is Negative Reinforcement? Welcome to our first bonus episode of "Dangerous at Both Ends, Tricky in the Middle"! In this special "Barn Bonus" series, we dive into key behavioural terms that are crucial for understanding equine behaviour and training. In today's episode, we explore the concept of negative reinforcement. Often misunderstood, negative reinforcement is a fundamental principle in behavioural psychology and horse training. We'll break down what it means, how it works, and why it's important in creating effective training routines. Join us for a concise and informative discussion that will enhance your understanding of horse behaviour and improve your training techniques. Perfect for both new and experienced horse enthusiasts looking to deepen their knowledge. Tune in and let's get to the core of negative reinforcement in this quick yet insightful episode! If you have any questions or comments, feel free to reach out to Barbara and Jen at the links below. We'd love to hear from you! Meet Your Hosts: Barbara Hardman (Bright Horse Equiation) - www.brighthorse.ie
In this episode, I take a look at the models and theories espoused by Vygotsky and how to apply them to sessions in the woods. The Bracken Outdoors Podcast is designed for Woodland Leaders from bushcraft instructors to Forest School practitioners, helping you build a life in the great outdoors. With weekly short episodes on all aspects of life as a freelance Woodland Leader, from business tips and advice to philosophy of outdoor education, as well as monthly deep dives into larger topics or interviews with inspirational professionals and leaders in the outdoor education space. To find out more about my mission to help people Belong Outside, head to https://brackenoutdoors.com/
Free Resources:
+ How to choose a tarp guide
+ Forest School Activity Ideas PDF
+ The complete guide to setting your rates as an outdoor leader
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Dialogue introduction to Singular Learning Theory, published by Olli Järviniemi on July 8, 2024 on LessWrong. Alice: A lot of people are talking about Singular Learning Theory. Do you know what it is? Bob: I do. (pause) Kind of. Alice: Well, I don't. Explanation time? Bob: Uh, I'm not really an expert on it. You know, there's a lot of materials out there that Alice: that I realistically won't ever actually look at. Or, I've looked at them a little, but I still have basically no idea what's going on. Maybe if I watched a dozen hours of introductory lectures I'd start to understand it, but that's not currently happening. What I really want is a short overview of what's going on. That's self-contained. And easy to follow. Aimed at a non-expert. And which perfectly answers any questions I might have. So, I thought I'd ask you! Bob: Sorry, I'm actually really not Alice: Pleeeease? [pause] Bob: Ah, fine, I'll try. So, you might have heard of ML models being hard to interpret. Singular Learning Theory (SLT) is an approach for understanding models better. Or, that's one motivation, at least. Alice: And how's this different from a trillion other approaches to understanding AI? Bob: A core perspective of SLT is studying how the model develops during training. Contrast this to, say, mechanistic interpretability, which mostly looks at the fully trained model. SLT is also more concerned about higher level properties. As a half-baked analogue, you can imagine two approaches to studying how humans work: You could just open up a human and see what's inside. Or, you could notice that, hey, you have these babies, which grow up into children, go through puberty, et cetera, what's up with that? What are the different stages of development? Where do babies come from? And SLT is more like the second approach. Alice: This makes sense as a strategy, but I strongly suspect you don't currently know what an LLM's puberty looks like. Bob: (laughs) No, not yet. Alice: So what do you actually have? Bob: The SLT people have some quite solid theory, and some empirical work building on top of that. Maybe I'll start from the theory, and then cover some of the empirical work. Alice: (nods) I. Theoretical foundations Bob: So, as you know, nowadays the big models are trained with gradient descent. As you also know, there's more to AI than gradient descent. And for a moment we'll be looking at the Bayesian setting, not gradient descent. Alice: Elaborate on "Bayesian setting"? Bob: Imagine a standard deep learning setup, where you want your neural network to classify images, predict text or whatever. You want to find parameters for your network so that it has good performance. What do you do? The gradient descent approach is: Randomly initialize the parameters, then slightly tweak them on training examples in the direction of better performance. After a while your model is probably decent. The Bayesian approach is: Consider all possible settings of the parameters. Assign some prior to them. For each model, check how well they predict the correct labels on some training examples. Perform a Bayesian update on the prior. Then sample a model from the posterior. With lots of data you will probably obtain a decent model. Alice: Wait, isn't the Bayesian approach very expensive computationally? Bob: Totally! Or, if your network has 7 parameters, you can pull it off. 
If it has 7 billion, then no. There are way too many models, we can't do the updating, not even approximately. Nevertheless, we'll look at the Bayesian setting - it's theoretically much cleaner and easier to analyze. So forget about computational costs for a moment. Alice: Will the theoretical results also apply to gradient descent and real ML models, or be completely detached from practice? Bob: (winks) Alice: You know what, maybe I'll just let you t...
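To make the contrast Bob draws concrete, here is a hedged toy sketch (mine, not from the post) of the two approaches for a one-parameter normal-mean model: gradient descent nudges a single parameter downhill on the loss, while the Bayesian approach enumerates a grid of candidate parameters, updates a prior on them, and samples from the posterior. The model, data, and hyperparameters are invented for illustration; assumes NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)
true_mu = 1.5
data = rng.normal(true_mu, 1.0, size=50)  # toy dataset

def neg_log_lik(mu):
    """Negative log-likelihood of N(mu, 1) on the data (up to a constant)."""
    return 0.5 * np.sum((data - mu) ** 2)

# --- Gradient descent approach: start somewhere, nudge toward lower loss ---
mu = 0.0
lr = 1e-3
for _ in range(2000):
    grad = -np.sum(data - mu)          # d/dmu of neg_log_lik
    mu -= lr * grad
print("gradient descent estimate:", mu)

# --- Bayesian approach: enumerate candidate parameters, update a prior ---
grid = np.linspace(-5, 5, 2001)        # "all possible settings" (feasible in 1D)
log_prior = np.zeros_like(grid)        # uniform prior over the grid
log_post = log_prior - np.array([neg_log_lik(m) for m in grid])
post = np.exp(log_post - log_post.max())
post /= post.sum()
sample = rng.choice(grid, p=post)      # sample a model from the posterior
print("posterior mean:", np.dot(grid, post), "sampled model:", sample)
```

With one parameter the exhaustive Bayesian update is cheap; the dialogue's point is that it stops being feasible once the parameter space is billions-dimensional.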
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AXRP Episode 31 - Singular Learning Theory with Daniel Murfet, published by DanielFilan on May 7, 2024 on The AI Alignment Forum. What's going on with deep learning? What sorts of models get learned, and what are the learning dynamics? Singular learning theory is a theory of Bayesian statistics broad enough in scope to encompass deep neural networks that may help answer these questions. In this episode, I speak with Daniel Murfet about this research program and what it tells us. Topics we discuss: What is singular learning theory? Phase transitions Estimating the local learning coefficient Singular learning theory and generalization Singular learning theory vs other deep learning theory How singular learning theory hit AI alignment Payoffs of singular learning theory for AI alignment Does singular learning theory advance AI capabilities? Open problems in singular learning theory for AI alignment What is the singular fluctuation? How geometry relates to information Following Daniel Murfet's work In this transcript, to improve readability, first names are omitted from speaker tags. Filan: Hello, everybody. In this episode, I'll be speaking with Daniel Murfet, a researcher at the University of Melbourne studying singular learning theory. For links to what we're discussing, you can check the description of this episode and you can read the transcripts at axrp.net. All right, well, welcome to AXRP. Murfet: Yeah, thanks a lot. What is singular learning theory? Filan: Cool. So I guess we're going to be talking about singular learning theory a lot during this podcast. So, what is singular learning theory? Murfet: Singular learning theory is a subject in mathematics. You could think of it as a mathematical theory of Bayesian statistics that's sufficiently general with sufficiently weak hypotheses to actually say non-trivial things about neural networks, which has been a problem for some approaches that you might call classical statistical learning theory. This is a subject that's been developed by a Japanese mathematician, Sumio Watanabe, and his students and collaborators over the last 20 years. And we have been looking at it for three or four years now and trying to see what it can say about deep learning in the first instance and, more recently, alignment. Filan: Sure. So what's the difference between singular learning theory and classical statistical learning theory that makes it more relevant to deep learning? Murfet: The "singular" in singular learning theory refers to a certain property of the class of models. In statistical learning theory, you typically have several mathematical objects involved. One would be a space of parameters, and then for each parameter you have a probability distribution, the model, over some other space, and you have a true distribution, which you're attempting to model with that pair of parameters and models. And in regular statistical learning theory, you have some important hypotheses. Those hypotheses are, firstly, that the map from parameters to models is injective, and secondly (quite similarly, but a little bit distinct technically) is that if you vary the parameter infinitesimally, the probability distribution it parameterizes also changes. This is technically the non-degeneracy of the Fisher information metric. 
But together these two conditions basically say that changing the parameter changes the distribution changes the model. And so those two conditions together are in many of the major theorems that you'll see when you learn statistics, things like the Cramér-Rao bound, many other things; asymptotic normality, which describes the fact that as you take more samples, your model tends to concentrate in a way that looks like a Gaussian distribution around the most likely parameter. So these are sort of basic ingredients in understandi...
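As an illustrative sketch of the two regularity conditions Murfet lists (my example, not one from the episode): consider a model whose mean is the product of two parameters, x ~ N(a·b, 1). The parameter-to-model map is many-to-one and the Fisher information matrix is degenerate, so both hypotheses fail and the model is singular. Assumes NumPy.

```python
import numpy as np

def fisher_information(a, b):
    """FIM of the model x ~ N(a*b, 1) with parameters (a, b).

    d/da log p = (x - a*b) * b,  d/db log p = (x - a*b) * a,
    and E[(x - a*b)^2] = 1 under the model itself, so the FIM is:
    """
    return np.array([[b * b, a * b],
                     [a * b, a * a]])

for a, b in [(1.0, 2.0), (0.5, 0.5), (0.0, 0.0)]:
    I = fisher_information(a, b)
    print(f"(a,b)=({a},{b})  det(FIM)={np.linalg.det(I):.3f}  rank={np.linalg.matrix_rank(I)}")

# det(FIM) = a^2*b^2 - a^2*b^2 = 0 everywhere, and the rank drops to 0 at the
# origin: the map (a, b) -> N(a*b, 1) is many-to-one and the non-degeneracy
# condition fails, which is exactly what makes the model "singular".
```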
What's going on with deep learning? What sorts of models get learned, and what are the learning dynamics? Singular learning theory is a theory of Bayesian statistics broad enough in scope to encompass deep neural networks that may help answer these questions. In this episode, I speak with Daniel Murfet about this research program and what it tells us. Patreon: patreon.com/axrpodcast Ko-fi: ko-fi.com/axrpodcast Topics we discuss, and timestamps: 0:00:26 - What is singular learning theory? 0:16:00 - Phase transitions 0:35:12 - Estimating the local learning coefficient 0:44:37 - Singular learning theory and generalization 1:00:39 - Singular learning theory vs other deep learning theory 1:17:06 - How singular learning theory hit AI alignment 1:33:12 - Payoffs of singular learning theory for AI alignment 1:59:36 - Does singular learning theory advance AI capabilities? 2:13:02 - Open problems in singular learning theory for AI alignment 2:20:53 - What is the singular fluctuation? 2:25:33 - How geometry relates to information 2:30:13 - Following Daniel Murfet's work The transcript: https://axrp.net/episode/2024/05/07/episode-31-singular-learning-theory-dan-murfet.html Daniel Murfet's twitter/X account: https://twitter.com/danielmurfet Developmental interpretability website: https://devinterp.com Developmental interpretability YouTube channel: https://www.youtube.com/@Devinterp Main research discussed in this episode: - Developmental Landscape of In-Context Learning: https://arxiv.org/abs/2402.02364 - Estimating the Local Learning Coefficient at Scale: https://arxiv.org/abs/2402.03698 - Simple versus Short: Higher-order degeneracy and error-correction: https://www.lesswrong.com/posts/nWRj6Ey8e5siAEXbK/simple-versus-short-higher-order-degeneracy-and-error-1 Other links: - Algebraic Geometry and Statistical Learning Theory (the grey book): https://www.cambridge.org/core/books/algebraic-geometry-and-statistical-learning-theory/9C8FD1BDC817E2FC79117C7F41544A3A - Mathematical Theory of Bayesian Statistics (the green book): https://www.routledge.com/Mathematical-Theory-of-Bayesian-Statistics/Watanabe/p/book/9780367734817 In-context learning and induction heads: https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html - Saddle-to-Saddle Dynamics in Deep Linear Networks: Small Initialization Training, Symmetry, and Sparsity: https://arxiv.org/abs/2106.15933 - A mathematical theory of semantic development in deep neural networks: https://www.pnas.org/doi/abs/10.1073/pnas.1820226116 - Consideration on the Learning Efficiency Of Multiple-Layered Neural Networks with Linear Units: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4404877 - Neural Tangent Kernel: Convergence and Generalization in Neural Networks: https://arxiv.org/abs/1806.07572 - The Interpolating Information Criterion for Overparameterized Models: https://arxiv.org/abs/2307.07785 - Feature Learning in Infinite-Width Neural Networks: https://arxiv.org/abs/2011.14522 - A central AI alignment problem: capabilities generalization, and the sharp left turn: https://www.lesswrong.com/posts/GNhMPAWcfBCASy8e6/a-central-ai-alignment-problem-capabilities-generalization - Quantifying degeneracy in singular models via the learning coefficient: https://arxiv.org/abs/2308.12108 Episode art by Hamish Doodles: hamishdoodles.com
Demonstrating how learning theories are applicable to a variety of real-world contexts, The Librarian's Guide to Learning Theory: Practical Applications in Library Settings (ALA Editions, 2023) will help library workers better understand how people learn so that they can improve support for instruction on their campuses and in their communities. In this book, Ann Medaille illustrates how libraries support learning in numerous ways, from makerspaces to book clubs, from media facilities to group study spaces, from special events to book displays. Medaille unchains the field of learning theory from its verbose and dense underpinnings to show how libraries can use its concepts and principles to better serve the needs of their users. Through 14 chapters organized around learning topics, including motivation, self-regulation, collaboration, and inquiry, readers will explore succinct overviews of major learning theories drawn from the fields of psychology, education, philosophy, and anthropology, among others. All of these can support reflection on concrete ways to improve library instruction, spaces, services, resources, and technologies. This accessible handbook includes teaching librarian's tips, reflection questions, and suggestions for further reading at the end of each chapter. Jen Hoyer is Technical Services and Electronic Resources Librarian at CUNY New York City College of Technology. Jen edits for Partnership Journal and organizes with the TPS Collective. She is co-author of What Primary Sources Teach: Lessons for Every Classroom and The Social Movement Archive. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/new-books-network
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My Criticism of Singular Learning Theory, published by Joar Skalse on November 19, 2023 on The AI Alignment Forum. In this post, I will briefly give my criticism of Singular Learning Theory (SLT), and explain why I am skeptical of its significance. I will especially focus on the question of generalisation --- I do not believe that SLT offers any explanation of generalisation in neural networks. I will also briefly mention some of my other criticisms of SLT, describe some alternative solutions to the problems that SLT aims to tackle, and describe some related research problems which I would be more excited about. (I have been meaning to write this for almost 6 months now, since I attended the SLT workshop last June, but things have kept coming in the way.) For an overview of SLT, see this sequence. This post will also refer to the results described in this post, and will also occasionally touch on VC theory. However, I have tried to make it mostly self-contained. The Mystery of Generalisation First of all, what is the mystery of generalisation? The issue is this: neural networks are highly expressive, and typically overparameterised. In particular, when a real-world neural network is trained on a real-world dataset, it is typically the case that this network is able to express many functions which would fit the training data well, but which would generalise poorly. Moreover, among all functions which do fit the training data, there are more functions (by number) that generalise poorly, than functions that generalise well. And yet neural networks will typically find functions that generalise well. To make this point more intuitive, suppose we have a 500,000-degree polynomial, and that we fit this to 50,000 data points. In this case, we have 450,000 degrees of freedom, and we should by default expect to end up with a function which generalises very poorly. But when we train a neural network with 500,000 parameters on 50,000 MNIST images, we end up with a neural network that generalises well. Moreover, adding more parameters to the neural network will typically make generalisation better, whereas adding more parameters to the polynomial is likely to make generalisation worse. A simple hypothesis might be that some of the parameters in a neural network are redundant, so that even if it has 500,000 parameters, the dimensionality of the space of all functions which it can express is still less than 500,000. This is true. However, the magnitude of this effect is too small to solve the puzzle. If you get the MNIST training set, and assign random labels to the training data, and then try to fit the network to this function, you will find that this often can be done. This means that while neural networks have redundant parameters, they are still able to express more functions which generalise poorly, than functions which generalise well. Hence the puzzle. The answer to this puzzle must be that neural networks have an inductive bias towards low-complexity functions. That is, among all functions which fit a given training set, neural networks are more likely to find a low-complexity function (and such functions are more likely to generalise well, as per Occam's Razor). The next question is where this inductive bias comes from, and how it works. 
Understanding this would let us better understand and predict the behaviour of neural networks, which would be very useful for AI alignment. I should also mention that generalisation only is mysterious when we have an amount of training data that is small relative to the overall expressivity of the learning machine. Classical statistical learning theory already tells us that any sufficiently well-behaved learning machine will generalise well in the limit of infinite training data. For an overview of these results, see this post. Thus, the quest...
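A hedged toy numerical illustration of the overparameterisation point in the post (my sketch, with the degree and sample sizes shrunk to something runnable): an unregularised high-degree polynomial can interpolate the training points yet generalise badly, which is the default behaviour the post says we should expect, and which makes neural-network generalisation surprising by contrast. Assumes NumPy; the target function and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def target(x):
    return np.sin(2 * np.pi * x)          # toy ground-truth function

# Small training set, larger held-out test set
x_train = rng.uniform(0, 1, 15)
y_train = target(x_train) + 0.1 * rng.normal(size=x_train.size)
x_test = np.linspace(0, 1, 200)
y_test = target(x_test)

def poly_fit_mse(degree):
    """Least-squares polynomial fit; returns (train MSE, test MSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_mse, test_mse

for degree in (3, 14):
    tr, te = poly_fit_mse(degree)
    print(f"degree {degree:2d}: train MSE {tr:.4f}, test MSE {te:.4f}")

# The degree-14 fit has ~zero training error (it can interpolate the 15 points)
# but a much larger test error: with many free parameters and no inductive bias
# toward simple functions, fitting the data does not imply generalising.
```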
In this episode, I am joined by professional dog trainer Brenda Aloff. She is a nationally and internationally recognized speaker, clinician and author. She'll be answering a student's question about how to keep horses and dogs safe around each other. Brenda is also an equestrian so she is the perfect person to ask! If you have dogs and horses you'll really enjoy this conversation.
About the Guest:
Brenda Aloff is a nationally and internationally recognized speaker, clinician and author. At clinics, she is known for straight-forward talk, a passion to make the daily lives of dogs better through promoting better relationships between dogs and their owners and a fun-loving sense of humor. Bringing 27+ years of practical and "in the trenches" experience to the table, Brenda works primarily with problem dogs and performance and working dogs. Brenda also has a variety of online classes, short courses and video educational opportunities on the website at www.brendaaloff.com. The topics are as varied as dog training itself, including Canine Body Language, raising puppies and training puppies in the whelping nest for Breeders, Learning Theory, Performance and Working Dog training programs and protocols. Living with dogs is not always easy! Brenda knows the "problem dog" group well, having lived with several bad actors and delinquents herself (Terriers, Guarding Dogs). This allows her to bring an intimate knowledge of how to deal with dogs and what can be done to prevent problems from developing. Brenda has taught thousands of group classes, from puppy to advanced competition to re-socialization classes for reactive dogs. Brenda's area of specialization is problem behavior in canines. A large percentage of her practice consists of dogs that are referred when other training techniques have been exhausted or failed. A high percentage of clientele consists of dogs with aggression problems. Brenda has shown in a variety of Performance Dog Events, herself, and has provided consultations, clinics and lessons to at-the-top-of-the-game trainers in Obedience, Agility, Protection Work, SAR, Service Dog Organizations, and Police Working Dogs.
About the Host:
Karen Rohlf, author and creator of Dressage Naturally, is an internationally recognized clinician who is changing the equestrian educational paradigm. She teaches students of all disciplines and levels from around the world in her clinics and the Dressage Naturally virtual programs. Karen is well known for training horses with a priority on partnership, a student-empowering approach to teaching, and a positive and balanced point of view. She believes in getting to the heart of our mental, emotional, and physical partnership with our horses by bringing together the best of the worlds of dressage and partnership-based training. Karen's passion for teaching extends beyond horse training. Her For The Love Of The Horse: Transform Your Business Seminar and Mastermind/Mentorship programs are a result of her commitment to helping heart-centered equine professionals thrive so that horses may have a happier life in this industry.
Resource Links:
Brenda Aloff: www.brendaaloff.com
The Dressage Naturally VIDEO CLASSROOM: https://dnc.dressagenaturally.net/
Take the Happy Athlete Quiz: https://inbound.dressagenaturally.net/happy-athlete-quiz-start/
Leave a question for Karen to answer on the pod:
Hannah is joined by author Andy Goldhawk to discuss his new book The Super Quick Guide to Learning Theories & Teaching Approaches. You can grab your copy from Sage Education here: The Super Quick Guide to Learning Theories and Teaching Approaches | SAGE Publications Ltd - and you can get 25% off your purchase by using the code TTR25 at checkout.
The word "audiation" means “to think music.” For music teachers who incorporate audiation and its accompanying Music Learning Theory in their teaching, it is a way to help students deepen their musical understanding from the very beginning of training. Music Learning Theory is a comprehensive approach to musical learning, based on an extensive body of research and practical field testing by Edwin E. Gordon. In this episode, Christine discusses audiation and Music Learning Theory with pianist and music educator Siliana Chiliachka, who uses Music Learning Theory and Audiation in her own piano studio. TOPICS INCLUDE: What is Audiation? What is Music Learning Theory? What is missing in modern music/piano instruction? What an audiation-based piano lesson looks like. How to present Music Learning Theory to parents and students. How to learn more about the Music Learning Theory method. For links to more information about Music Learning Theory and Audiation, visit our shownotes on our website: https://frostedlens.com/musicians-vs-the-world/f/audiation-and-music-learning-theory-with-siliana-chiliachka
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Jesse Hoogland on Developmental Interpretability and Singular Learning Theory, published by Michaël Trazzi on July 6, 2023 on LessWrong. Jesse Hoogland is a research assistant at David Krueger's lab in Cambridge studying AI Safety who has recently been publishing on LessWrong about how to apply Singular Learning Theory to Alignment, and even organized some workshop in Berkeley last week around this. I thought it made sense to interview him to have some high-level overview of Singular Learning Theory (and other more general approaches like developmental interpretability). Below are some highlighted quotes from our conversation (available on Youtube, Spotify, Google Podcast, Apple Podcast). For the full context for each of these quotes, you can find the accompanying transcript. Interpreting Neural Networks: The Phase Transition View Studying Phase Transitions Could Help Detect Deception "We want to be able to know when these dangerous capabilities are first acquired because it might be too late. They might become sort of stuck and crystallized and hard to get rid of. And so we want to understand how dangerous capabilities, how misaligned values develop over the course of training. Phase transitions seem particularly relevant for that because they represent kind of the most important structural changes, the qualitative changes in the shape of these models internals. Now, beyond that, another reason we're interested in phase transitions is that phase transitions in physics are understood to be a kind of point of contact between the microscopic world and the macroscopic world. So it's a point where you have more control over the behavior of a system than you normally do. That seems relevant to us from a safety engineering perspective. Why do you have more control in a physical system during phase transitions?" (context) A Concrete Example of Phase Transition In Physics and an analogous example inside of neural networks "Jesse: If you heat a magnet to a high enough temperature, then it's no longer a magnet. It no longer has an overall magnetization. And so if you bring another magnet to it, they won't stick. But if you cool it down, at some point it reaches this Curie temperature. If you push it lower, then it will become magnetized. So the entire thing will all of a sudden get a direction. It'll have a north pole and a south pole. So the thing is though, like, which direction will that north pole or south pole be? And so it turns out that you only need an infinitesimally small perturbation to that system in order to point it in a certain direction. And so that's the kind of sensitivity you see, where the microscopic structure becomes very sensitive to tiny external perturbations. Michaël: And so if we bring this back to neural networks, if the weights are slightly different, the overall model could be deceptive or not. Is it something similar? Jesse: This is speculative. There are more concrete examples. So there are these toy models of superposition studied by Anthropic. And that's a case where you can see that it's learning some embedding and unembeddings. So it's trying to compress data. You can see that the way it compresses data involves this kind of symmetry breaking, this sensitivity, where it selects one solution at a phase transition. So that's a very concrete example of this." 
(context) Developmental Interpretability "Suppose it's possible to understand what's going on inside of neural networks, largely understand them. First assumption. Well then, it's still going to be very difficult to do that at one specific moment in time. I think intractable. The only way you're actually going to build up an exhaustive idea of what structure the model has internally, is to look at how it forms over the course of training. You want to look at each moment, where you learn sp...
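To make the magnet example in the quote concrete, here is a hedged sketch (mine, not Hoogland's) of the textbook mean-field magnetisation equation m = tanh(m/T), solved by fixed-point iteration: above the critical temperature T_c = 1 the only solution is m = 0, while below it a nonzero magnetisation appears whose sign is decided by an arbitrarily small initial perturbation - the sensitivity at a phase transition described above. Assumes NumPy.

```python
import numpy as np

def magnetisation(T, m0=1e-6, iters=10_000):
    """Solve the mean-field self-consistency equation m = tanh(m / T) by iteration.

    m0 is a tiny initial perturbation; its sign picks which ordered state
    the system falls into below the critical temperature T_c = 1.
    """
    m = m0
    for _ in range(iters):
        m = np.tanh(m / T)
    return m

for T in (1.5, 1.1, 0.9, 0.5):
    print(f"T={T}: m={magnetisation(T):+.4f}  (perturbation +1e-6)")
    print(f"T={T}: m={magnetisation(T, m0=-1e-6):+.4f}  (perturbation -1e-6)")

# Above T=1 both runs collapse to m ~ 0; below T=1 the same tiny perturbation
# with opposite signs sends the system to opposite macroscopic magnetisations --
# a phase transition where microscopic structure is sensitive to small nudges.
```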
Jesse Hoogland is a research assistant at David Krueger's lab in Cambridge studying AI Safety. More recently, Jesse has been thinking about Singular Learning Theory and Developmental Interpretability, which we discuss in this episode. Before he came to grips with existential risk from AI, he co-founded a health-tech startup automating bariatric surgery patient journeys. (00:00) Intro (03:57) Jesse's Story And Probability Of Doom (06:21) How Jesse Got Into Singular Learning Theory (08:50) Intuition behind SLT: the loss landscape (12:23) Does SLT actually predict anything? Phase Transitions (14:37) Why care about phase transition, grokking, etc (15:56) Detecting dangerous capabilities like deception in the (devel)opment (17:24) A concrete example: magnets (20:06) Why Jesse Is Bullish On Interpretability (23:57) Developmental Interpretability (28:06) What Happens Next? Jesse's Vision (31:56) Toy Models of Superposition (32:47) Singular Learning Theory Part 2 (36:22) Are Current Models Creative? Reasoning? (38:19) Building Bridges Between Alignment And Other Disciplines (41:08) Where To Learn More About Singular Learning Theory Make sure I upload regularly: https://patreon.com/theinsideview Youtube: https://youtu.be/713KyknwShA Transcript: https://theinsideview.ai/jesse Jesse: https://twitter.com/jesse_hoogland Host: https://twitter.com/MichaelTrazzi Patreon supporters: - Vincent Weisser - Gunnar Höglund - Ryan Coppolo - Edward Huff - Emil Wallner - Jesse Hoogland - William Freire - Cameron Holmes - Jacques Thibodeau - Max Chiswick - Jack Seroy - JJ Hepburn
If we had to pick just one thing as the key to the success of a learning business, we would argue for the learning experiences offered, because those experiences are at the core of what learning businesses do. In order to offer effective learning experiences, we need a solid understanding of adult learning theory. Although many learning business professionals are already familiar with adult learning theory, there can be tremendous value in taking time to periodically revisit it. In this redux episode of the Leading Learning Podcast, co-hosts Celisa Steele and Jeff Cobb examine three principles of adult learning theory and the implications each has for designing, developing, and delivering meaningful learning experiences. Show notes and a downloadable transcript are available at https://www.leadinglearning.com/episode364.
In this episode, I talk about how and why Exposure and Response Prevention works. I discuss.. - an overview of Exposure and Response Prevention (ERP) - the Habituation model - the Inhibitory Learning model - enhancing the effectiveness of ERP - and so much more Head to my website at www.jennaoverbaughlpc.com to sign up for my free e-mail newsletter, grab your free "Imagine Your Recovered Life" PDF, and download your free “5 Must Know Strategies for Managing Anxiety and Intrusive Thoughts” video + access expertly crafted masterclasses just for you. Course and more coming soon! Remember: this podcast is for informational purposes only and may not be the best fit for you and your personal situation. It shall not be construed as mental health or medical advice. The information and education provided here is not intended or implied to supplement or replace professional advice of your own professional mental health or medical treatment, advice, and/or diagnosis. Always check with your own physician or medical or mental health professional before trying or implementing any information read here. Jenna Overbaugh, LPC
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My impression of singular learning theory, published by Ege Erdil on June 18, 2023 on LessWrong. Disclaimer: I'm by no means an expert on singular learning theory and what I present below is a simplification that experts might not endorse. Still, I think it might be more comprehensible for a general audience than going into digressions about blowing up singularities and birational invariants. Here is my current understanding of what singular learning theory is about in a simplified (though perhaps more realistic?) discrete setting. Suppose you represent a neural network architecture as a map A: 2^N → F, where 2 = {0, 1}, 2^N is the set of all possible parameters of A (seen as floating point numbers, say) and F is the set of all possible computable functions from the input and output space you're considering. In thermodynamic terms, we could identify elements of 2^N as "microstates" and the corresponding functions that the NN architecture A maps them to as "macrostates". Furthermore, suppose that F comes together with a loss function L: F → R evaluating how good or bad a particular function is. Assume you optimize L using something like stochastic gradient descent on the function L with a particular learning rate. Then, in general, we have the following results: (1) SGD defines a Markov chain structure on the space 2^N whose stationary distribution is proportional to e^(−βL(A(θ))) on parameters θ, for some positive constant β > 0 that depends on the learning rate. This is just a basic fact about the Langevin dynamics that SGD would induce in such a system. (2) In general A is not injective, and we can define the "A-complexity" of any function f ∈ Im(A) ⊂ F as c(f) = N log 2 − log |A^(-1)(f)|. (3) Then, the probability that we arrive at the macrostate f is going to be proportional to e^(−c(f) − βL(f)). When L is some kind of negative log-likelihood, this approximates Solomonoff induction in a tempered Bayes paradigm - we raise likelihood ratios to a power β ≠ 1 - insofar as the A-complexity c(f) is a good approximation for the Kolmogorov complexity of the function f, which will happen if the function approximator defined by A is sufficiently well-behaved. The intuition for why we would expect (3) to be true in practice has to do with the nature of the function approximator A. When c(f) is small, it probably means that we only need a small number of bits of information on top of the definition of A itself to define f, because "many" of the possible parameter values for A are implementing the function f. So f is probably a simple function. On the other hand, if f is a simple function and A is sufficiently flexible as a function approximator, we can probably implement the functionality of f using only a small number of the N bits in the domain of A, which leaves us the rest of the bits to vary as we wish. This makes |A^(-1)(f)| quite large, and by extension the complexity c(f) quite small. The vague concept of "flexibility" mentioned in the paragraph above requires A to have singularities of many effective dimensions, as this is just another way of saying that the image of A has to contain functions with a wide range of A-complexities.
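As a rough illustration of the discrete setup above, here is a minimal sketch. The particular architecture map A and loss L below are invented for the example (the post does not specify any); only the bookkeeping follows the description: enumerate all 2^N parameter settings, count the preimage of each macrostate f, compute c(f) = N log 2 − log |A^(-1)(f)|, and weight each macrostate by e^(−c(f) − βL(f)).

```python
import math
from collections import Counter

# Toy instance of the discrete setup: N parameter bits, an architecture map A
# from bit-tuples to "functions" (here just truth tables over one input bit),
# and a loss L. Both A and L are invented for illustration.

N = 4

def A(theta):
    """Map a parameter bit-tuple to a macrostate: the truth table (f(0), f(1))."""
    f0 = theta[0] ^ theta[1]            # output on input 0
    f1 = (theta[2] & theta[3]) ^ f0     # output on input 1
    return (f0, f1)

def L(f):
    """Hypothetical loss: Hamming distance of f's truth table from the target (0, 1)."""
    return sum(a != b for a, b in zip(f, (0, 1)))

# Enumerate all 2^N microstates and count how many parameters map to each macrostate f.
preimage = Counter(A(tuple((i >> k) & 1 for k in range(N))) for i in range(2 ** N))

def c(f):
    """A-complexity: c(f) = N*log(2) - log|A^{-1}(f)|, small when many parameters implement f."""
    return N * math.log(2) - math.log(preimage[f])

# Stationary probability of ending at macrostate f is proportional to exp(-c(f) - beta*L(f)).
beta = 1.0
weights = {f: math.exp(-c(f) - beta * L(f)) for f in preimage}
Z = sum(weights.values())
for f, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"f={f}  |preimage|={preimage[f]}  c(f)={c(f):.3f}  L(f)={L(f)}  P(f)={w / Z:.3f}")
```

Running this shows the tradeoff the post describes: a simple function with a large preimage (low c(f)) can end up with comparable or higher probability than a better-fitting but more complex one, depending on β.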
If A is a one-to-one function, this clean version of the theory no longer works, though if A is still "close" to being singular (for instance, because many of the functions in its image are very similar) then we can still recover results like the one I mentioned above. The basic insights remain the same in this setting. I'm wondering what singular learning theory experts have to say about this simplification of their theory. Is this explanation missing some important details that are visible in the full theory? Does the full theory make some predictions that this simplified story does not make? Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonli...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Distilling Singular Learning Theory, published by Liam Carroll on June 16, 2023 on LessWrong. TLDR; In this sequence I distill Sumio Watanabe's Singular Learning Theory (SLT) by explaining the essence of its main theorem - Watanabe's Free Energy Formula for Singular Models - and illustrating its implications with intuition-building examples. I then show why neural networks are singular models, and demonstrate how SLT provides a framework for understanding phases and phase transitions in neural networks. Finally, I will outline a research agenda for applying the insights of SLT to AI alignment. Epistemic status: The core theorems of Singular Learning Theory have been rigorously proven and published by Sumio Watanabe across 20 years of research. Precisely what it says about modern deep learning, and its potential application to alignment, is still speculative. Acknowledgements: This sequence has been produced with the support of a grant from the Long Term Future Fund. I'd like to thank all of the people that have given me feedback on each post: Ben Gerraty, Jesse Hoogland, Matthew Farrugia-Roberts, Luke Thorburn, Rumi Salazar, Guillaume Corlouer, and in particular my supervisor and editor-in-chief Daniel Murfet. Theory vs Examples: The sequence is a mixture of synthesising the main theoretical results of SLT, and providing simple examples and animations that illustrate its key points. As such, some theory-based sections are slightly more technical. Some readers may wish to skip ahead to the intuitive examples and animations before diving into the theory - these are clearly marked in the table of contents of each post. Prerequisites: Anybody with a basic grasp of Bayesian statistics and multivariable calculus should have no problems understanding the key points. Importantly, despite SLT pointing out the relationship between algebraic geometry and statistical learning, no prior knowledge of algebraic geometry is required to understand this sequence - I will merely gesture at this relationship. Jesse Hoogland wrote an excellent introduction to SLT which serves as a high level overview of the ideas that I will discuss here, and is thus recommended pre-reading to this sequence. SLT Workshop: I have prepared the sequence with the Workshop on Singular Learning Theory and Alignment in mind (original announcement here). For those attending the virtual Primer from June 19th-24th 2023, this work serves as a useful companion piece. If you haven't signed up yet and find this sequence interesting, consider attending! Thesis: The sequence is derived from my recent masters thesis which you can read about at my website. Introduction Knowledge to be discovered [in a statistical model] corresponds to a singularity. If a statistical model is devised so that it extracts hidden structure from a random phenomenon, then it naturally becomes singular. Sumio Watanabe In 2009, Sumio Watanabe wrote these two profound statements in his groundbreaking book Algebraic Geometry and Statistical Learning where he proved the first main results of Singular Learning Theory (SLT). Up to this point, this work has gone largely under-appreciated by the AI community, probably because it is rooted in highly technical algebraic geometry and distribution theory. 
On top of this, the theory is framed in the Bayesian setting, which contrasts with the SGD-based setting of modern deep learning. But this is a crying shame, because SLT has a lot to say about why neural networks, which are singular models, are able to generalise well in the Bayesian setting, and it is very possible that these insights carry over to modern deep learning. At its core, SLT shows that the loss landscape of singular models, the KL divergence K(w), is fundamentally different to that of regular models like linear regression, consisting of flat valley...
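To make "singular" concrete, here is a small illustrative sketch; the particular loss functions are my own assumption, not from the sequence. A regular model's loss has an isolated minimum with a full-rank Hessian, while a singular model's loss, such as K(w) = (w1·w2)^2, is minimized along entire flat valleys and has a degenerate Hessian there.

```python
import numpy as np

# Two toy population losses over a 2-d parameter w = (w1, w2). The regular one
# behaves like linear regression (isolated minimum, full-rank Hessian); the
# singular one is minimized on the whole set {w1 = 0} union {w2 = 0}, i.e. it
# has flat valleys of minima. Both functions are assumptions for illustration.

def regular_loss(w):
    w1, w2 = w
    return w1**2 + w2**2

def singular_loss(w):
    w1, w2 = w
    return (w1 * w2)**2

def hessian(f, w, eps=1e-4):
    """Central finite-difference Hessian of f at w."""
    w = np.asarray(w, dtype=float)
    n = len(w)
    H = np.zeros((n, n))
    I = np.eye(n)
    for i in range(n):
        for j in range(n):
            ei, ej = eps * I[i], eps * I[j]
            H[i, j] = (f(w + ei + ej) - f(w + ei - ej)
                       - f(w - ei + ej) + f(w - ei - ej)) / (4 * eps**2)
    return H

origin = np.zeros(2)
print("regular  Hessian eigenvalues:", np.linalg.eigvalsh(hessian(regular_loss, origin)))
print("singular Hessian eigenvalues:", np.linalg.eigvalsh(hessian(singular_loss, origin)))
# The regular loss has two strictly positive eigenvalues (2, 2); the singular
# loss's Hessian at its minimum is numerically zero, reflecting the flat
# valleys of minima that standard (regular) asymptotics cannot handle.
```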
By Adam Turteltaub Our colleagues expect to be treated like adults, and that should include the compliance training we assign them. CJ Wolf, a professor at Brigham Young University-Idaho and founder of Codermedschool.com, explains we need to embrace adult learning theory, which recognizes that adults learn differently than children. Making mistakes, for example, is particularly powerful. Good compliance training, consequently, should be less about telling them what they need to know and more about providing them with an opportunity to work through scenarios and make their errors in a safe classroom setting rather than out in the real world. He shares a host of similar good advice in this podcast and in the SCCE Creating Effective Compliance Training Workshop. Click below to hear other do's and don'ts to make your training more relevant: Do assess the effectiveness of the training. Be sure to include testing. Don't assess the effectiveness just once. See what employees remember several months later. Don't overload new employees on the first day. A lot of departments are throwing information at them. Be judicious in terms of what you expect them to tackle right away, and what can wait until later. Do have a training plan based on your organization's risk. Don't give everyone the same training. Tailor based on their needs. Want to know more? Think about joining him for the Creating Effective Compliance Training Workshop.
In this episode of the Brawn Body Health and Fitness Podcast - Dan is joined by Dr. Robert Butler to discuss the concepts of learning theory and acquiring movement strategy in athletes, with a focus on baseball. Butler is beginning his eighth season in Major League Baseball with the St. Louis Cardinals. As the Director of Performance, Dr. Butler oversees player physical development and prevention programs, nutrition, sports science, technology integration and mental health services for the St. Louis Cardinals organization as well as the medical care for the affiliate teams. Prior to joining the Cardinals, he served as an Assistant Professor and Associate Director of the Michael W. Krzyzewski Human Performance Laboratory at Duke University for 5-1/2 years. He is a native of Wilmington, Delaware and a graduate of Marietta College where he earned his bachelor's in biology, Springfield College where he earned his M.S. in Biomechanics and Movement Science, and the University of Delaware where he earned his Ph.D. in Biomechanics. Following his doctoral training, he completed a post-doc at the University of North Carolina at Chapel Hill and also completed his Doctorate in Physical Therapy at the University of Evansville. Butler and his wife, Sarah, daughters, Madeline and Emelia, son, Greyson, and dogs, Blue and Maple, reside in Hillsborough, NC. For more on Dr. Butler, be sure to check out https://www.mlb.com/cardinals/team/front-office/robert-butler , https://www.researchgate.net/profile/Robert-Butler-9/2 , https://www.linkedin.com/in/robert-butler-a00a0422b/ or @rjbutler_dptphd on Twitter Episode Sponsors: MedBridge: https://www.medbridgeeducation.com/brawn-body-training or Coupon Code "BRAWN" for 40% off your annual subscription! CTM Band: https://ctm.band/collections/ctm-band coupon code "BRAWN10" = 10% off! PurMotion: "brawn" = 10% off!! TRX: trxtraining.com coupon code "TRX20BRAWN" = 20% off GOT ROM: https://www.gotrom.com/a/3083/5X9xTi8k Red Light Therapy through Hooga Health: hoogahealth.com coupon code "brawn" = 12% off Ice shaker affiliate link: https://www.iceshaker.com?sca_ref=1520881.zOJLysQzKe Training Mask: "BRAWN" = 20% off at checkout https://www.trainingmask.com?sca_ref=2486863.iestbx9x1n Make sure you SHARE this episode with a friend who could benefit from the information we shared! Check out everything Dan is up to, including blog posts, fitness programs, and more by clicking here: https://linktr.ee/brawnbodytraining Liked this episode? Leave a 5-star review on your favorite podcast platform! --- Send in a voice message: https://podcasters.spotify.com/pod/show/daniel-braun/message Support this podcast: https://podcasters.spotify.com/pod/show/daniel-braun/support
Adults do not learn as children learn. They have prior experience, they have real-world problems to solve and, crucially, they can get up and walk out if they lose interest! In this week's episode of The Mind Tools L&D Podcast, Ross G and Ross Dickie are joined by Dr Carrie Graham. In amongst talk of bridge building, we discuss: · the core principles of Adult Learning Theory · how to apply Adult Learning Theory in the workplace · why keeping the life of the learner front-of-mind is so important. In ‘What I Learned This Week', Ross Dickie got smutty by diving into the acronym shift from SMET to STEM. See more here: McComas, W. F. (2014). STEM: Science, technology, engineering, and mathematics. The Language of Science Education: An Expanded Glossary of Key Terms and Concepts in Science Teaching and Learning, 102-103. Online at: link.springer.com/chapter/10.1007/978-94-6209-497-0_92 And Ross G got snarky, with a deep dive into the groundbreaking conspiracy theory that surrounds public information game Cat Park. You can play the game at: catpark.game/ Ross read about Cat Park in The Economist. See: economist.com/culture/2023/04/05/games-are-a-weapon-in-the-war-on-disinformation If you're interested in the conspiracy, you'll need to do your own research. We don't recommend it. To find out more about Carrie, and to book a CALM consultation, visit: drcarriegraham.com/ For more from us, including access to our back catalogue of podcasts, visit mindtoolsbusiness.com. There, you'll also find details of our award-winning performance support toolkit, our off-the-shelf e-learning, and our custom work. Connect with our speakers If you'd like to share your thoughts on this episode, connect with our speakers on Twitter: · Ross Garner - @RossGarnerMT · Ross Dickie - @RossDickieMT · Dr Carrie Graham - LinkedIn
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Singularities against the Singularity: Announcing Workshop on Singular Learning Theory and Alignment, published by Jesse Hoogland on April 1, 2023 on LessWrong. We are excited to announce a two-week seminar on singular learning theory (SLT) and AI alignment, taking place from June 19th to July 2nd in Berkeley. SLT studies the relation between the geometry of the loss landscape and the computational properties of machines learning in that landscape. It builds on powerful theoretical and experimental machinery developed in physics (esp. solid-state physics), where it is the geometry of the energy landscape that determines the relevant properties of physical systems. During this workshop, we'll bring together singular learning theorists and alignment researchers to connect and further the applications of singular learning theory to alignment. The workshop aims to familiarize alignment researchers with SLT, seed new research collaborations, and develop tools based on SLT ideas. There will be talks by Daniel Murfet, Susan Wei, Shaowei Lin, Alexander Gietelink Oldenziel, Jesse Hoogland, and others. Time-Commitment Options We offer two different time-commitment options for participants. Full-Time: This track is designed for participants who want to fully engage with the material and delve deep into the concepts and applications of SLT. Intermittent: This track is designed for participants who want to get a taster of SLT and its applications for alignment without committing to the full two-week program. Several lectures will be open to a broader AI safety audience and will assume less familiarity with the subject. Participants will be able to join for just the afternoon or attend selected lectures throughout the two weeks. Overview The seminar consists of two parts. Week 1: "The Primer" The first week will provide a comprehensive introduction to SLT and its relevance to AI alignment. The material is designed to be approachable if you have the equivalent of a technical undergraduate degree (e.g., in CS, math, or physics). Participants will have the opportunity to learn from lectures covering topics such as thermodynamics, catastrophe theory, algebraic geometry, and SLT. There will also be sessions focused on experimental aspects of SLT and introductions to AI alignment and mechanistic interpretability. Weekend: Hackathon During the weekend, we will host a hackathon dedicated to developing novel SLT-based tools. Week 2: Advanced Topics and Collaboration The second week will delve deeper into the SLT and algebro-geometric foundations behind the toy models of superposition paper. This will serve as an application of the material covered in the first week, allowing participants to fully grasp the concepts and their relevance to AI alignment. The week will also feature presentations from researchers, open discussions, and opportunities for networking and collaboration. Registration If you're interested in participating, please register by filling out this form. Further updates on event details will be provided to registered participants. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
For Part 2 of Season 3, Episode 13 feat. Krista Jadro (Music Learning Specialist/Founder of Music Learning Academy) with guest co-host Siliana Chiliachka, you will hear our discussions on: ✅ What is sequencing in Music Learning Theory? ✅ What makes MLT different from traditional music teaching and learning? ✅ Why is so much attention focused on tonal and rhythm audiation? And more!
I had a wonderful conversation with Krista Jadro, Music Learning Theory Specialist and Founder of Music Learning Academy, for this episode.
Here is a little teaser of The Piano Pod's latest episode with guest Krista Jadro (Music Learning Theory Specialist & Founder of Music Learning Academy) and guest co-host Siliana Chiliachka (Pianist/Educator and MLT Specialist). I had a wonderful conversation with Krista Jadro, Music Learning Theory Specialist and Founder of Music Learning Academy, for this episode.
The Teaching & Learning: Theory vs Practice podcast at Governors State University celebrates Black History Month by displaying 28 days of Black voices through the lens of scholars and educators.
Marco Valente (LinkedIn, Twitter) is a sustainability scientist and a facilitator. His way of speaking into and navigating complexity is wonderfully rich. Marco shares some of his thinking on a wide range of topics. We speak about epistemic humility: through which lens do we come to see the world? Loops of learning and Marco's decision diary. Wicked learning environments and how to navigate them. What do we do in the face of radical uncertainty? What does it mean to set a minimum set of rules or minimum specifications? What is the replication crisis, and why does it not warrant disbelief in science? What is the paradox embedded in evidence-based decision making? A wonderfully rich conversation on how we get to know our world. Host: Amit Paul
The Edgar Dale learning theory is one of the most popular and well-known models when it comes to how people learn. The model has stood the test of time, and remains relevant even today. If you're looking to improve your learning skills, then this theory is a great place to start. So what is the Edgar Dale learning theory, and how can you apply it in your own life? Let's take a closer look.
In continuation of last month's theme of learning through conditioning, this month's education series is dedicated to social learning theory. Learn about the psychologist best known for social learning theory and his famous experiment at Stanford known as the Bobo doll experiment.
Adult Learning Theory can help L&D improve learner engagement, knowledge retention and skills application. In this episode, Dr Carrie O. Graham explores this topic and its application to the priorities of today's L&D departments. KEY TAKEAWAYS Adult learning theory focuses on the fact that adults learn very differently from children. Improving information retention and ensuring new skills are appropriately and accurately applied cement that learning. Learning can thrive in informal settings. Design the learning experience so that it mirrors the environment where learners will use those skills. Make things real for people by comparing what they are doing now with what they will do when using the new skills or information. Incorporate opportunities for problem-solving. Understand what motivates your learners; Carrie shares a couple of examples to help you with this. BEST MOMENTS 'My work focuses on improving the engagement of the adult learner. Supporting their information retention.' 'Understand who they are as a learner.' 'If you're thinking long-term, using a quantitative assessment of learning is absolutely mandatory.' 'As you are building your content, make references back to who they are as a learner.' VALUABLE RESOURCES The Learning and Development Podcast - https://podcasts.apple.com/gb/podcast/the-learning-development-podcast/id1466927523 L&D Masterclass Series: https://360learning.com/blog/ EPISODE RESOURCES You can follow and contact Carrie via: LinkedIn: https://www.linkedin.com/in/drcarriegraham/ Website: https://www.drcarriegraham.com ABOUT THE GUEST Carrie O. Graham Bio Owner of Carrie O. Graham Learning & Solutions, Dr. Graham helps subject matter experts improve learning outcomes by strategically integrating adult learning best practices. A published author, researcher, and conference presenter with 25+ years' experience in learning, instructional design, and leadership development across industries. Dr. Graham has a reputation for understanding problems that are stated and revealing what is not stated, then asking critical questions to help people uncover clear and insightful solutions. Not believing in the “one-size-fits-all” approach, she customises solutions to support unique individual and organisational needs. ABOUT THE HOST David James David has been a People Development professional for more than 20 years, most notably as Director of Talent, Learning & OD for The Walt Disney Company across Europe, the Middle East & Africa. As well as being the Chief Learning Officer at 360Learning, David is a prominent writer and speaker on topics around modern and digital L&D. CONTACT METHOD Twitter: https://twitter.com/davidinlearning/ LinkedIn: https://www.linkedin.com/in/davidjameslinkedin/ L&D Collective: https://360learning.com/the-l-and-d-collective/ Blog: https://360learning.com/blog/ L&D Masterclass Series: https://360learning.com/blog/ See omnystudio.com/listener for privacy information.
I recently listened to John's audiobook "A Voice For The Horse" on Audible and found John's explanation of the terms used in equitation science learning theory invaluable. As a result, I asked John to talk a little about it in this podcast. It is always great to catch up with John, and as always there are stories thrown in about lessons learned from Tom Dorrance. Link to John's book on Audible
Express yourself fully and speak your truth! In this episode I have my girl Brittany Barcellos, whom I met at an events workshop 2 years ago, right before the pandemic shutdown. How long have you had the fear of being seen? We talk about attachment theory and how it has affected us as humans. We were raised thinking that we were choosing for ourselves, and as we get older we notice that we made certain choices because others wanted us to make them. Listen to this episode as we get detailed about the connection and disconnection from ourselves. IN THIS EPISODE, I TALK ABOUT: Are you playing small? Why are you hiding, and is it a pattern? Attachment theory and its effect throughout life. Are you willing to be vulnerable and exposed? FOLLOW BRITTANY: INSTAGRAM Founder of LeadHer Helping Leaders LEAD THEMSELVES to the Life+Biz of their dreams Unlocking more Freedom, Impact, & Abundance TOGETHER ✅ RESOURCES: Text CREATE to 323-524-9857 to apply for my Get Up Girl Gang community If you enjoyed this episode, make sure to give us a five-star rating and leave us a review on iTunes, Podcast Addict, Podchaser and Castbox. ✅ LET'S CONNECT: The Get Up Girl Instagram Facebook Monthly online fitness academy
Slam the Gavel welcomes William Dubree, Ph.D., onto the show. Dr. Dubree is a project developer in Learning Theory applications to human behavior, a visionary, and a teacher. Dr. Dubree saw the importance of blending the fields of economics, behavioral psychology, healthcare and other sciences in developing a critical understanding and definition of the productive and well-adaptive human operating in a complex social environment. Dr. Dubree has spent a 50-year career working in experimental psychology with a special focus on Learning Theory applications to behavior. His work in Quality of Life resulted in an expansion of the understandings in Learning Theory, especially the construction of theoretical evolutionary models that explained the roots of the unaffected motivational drive systems in human behavior. His career has concentrated on large-scale planning of Quality of Life projects and extensive research. As an experimental psychologist, Dr. Dubree specialized in the observation and measurement of behavior. Learning Theory spawned behavior solutions that worked in addressing even pathological behavior such as parental child abuse. Over the last 4 years, Dr. Dubree shifted his work to address the global epidemic of families destroyed by abusive parents, where the insouciance of the Family Courts has become an ENABLER. Using his experimental background, he has crafted solutions for the victims of Parental Alienation through the lens of directive behavioral strategies, which recognize the pathology quickly and implement procedures that are intuitively reasoned and, we now know, are effective modifiers that remove the victims from the experience field that caused and maintained the aberrant, alienation-influenced behaviors. To reach Dr. W. Dubree: http://www.hopeindarkness.me/appointment.htm Support the show: https://www.buymeacoffee.com/maryannpetri http://beentheregotout.com/ https://monicaszymonik.mykajabi.com/Masterclass USE CODE SLAM THE GAVEL PODCAST FOR 10% OFF THE COURSE http://www.dismantlingfamilycourtcorruption.com/ Music by: mictechmusic@yahoo.com
Dr. April Perry, who was Cassie's graduate school advisor AND the person who originally taught her about the Happenstance Learning Theory, joins us to talk about her career journey, moments of happenstance she's experienced, the importance of reflection, and making meaning of our experiences. Episode Topics: - Guest Intro (00:01:42) - The Happenstance Learning Theory & Experiential Learning Cycle (00:04:56) - The happenstance of Dr. Perry & Cassie meeting (00:06:18) - Dr. April Perry's Career Story (00:09:34) - True Moments of Happenstance (00:16:44) - Navigating & Negotiating Careers with a Partner (00:21:07) - Daily POP (00:45:03) - Dr. April Perry's Book, A Practitioner's Guide to Supporting Graduate and Professional Students (00:50:14) Dr. April L. Perry is an Associate Professor in the M.Ed. Higher Education Student Affairs program at Western Carolina University. Her research is primarily on student identity development, career development, student transitions, and institutional initiatives for student success. As a practitioner, April has worked in graduate school administration, student leadership programs, parent & family programs, fundraising & marketing, and academic tutoring services, and will be the Department Head of Human Services at WCU starting Summer 2022. She lives by the motto that the only thing better than watching someone grow is helping them grow. In 2016, April received the WCU award for Excellence in Graduate Student Mentoring, in 2017 and 2022 was named AGAPSS' Outstanding Professional, and in 2022 was honored with NASPA Faculty Council's Outstanding Support for Graduate Students Award. Links: Book: https://www.routledge.com/A-Practitioners-Guide-to-Supporting-Graduate-and-Professional-Students/Shepard-Perry/p/book/9780367639884 *Use Code FLA22 for 20% discount! LinkedIn: https://www.linkedin.com/in/aprilperry/ Let's Connect: Instagram: @HappenstanceThePodcast & @CareerCoachCassie
Welcome to the Great Women in Compliance Podcast, co-hosted by Lisa Fine and Mary Shirley. Kristy Grant-Hart was one of the Great Women in Compliance podcast's inaugural guests, whose episode launched on 6 December 2018. She agreed to be on the show before we had a track record and reputation – we're grateful to her for supporting us right from the start. We invited Kristy, one of the Compliance community's most respected voices, to return to the show to share with us how adult learning theory can best be applied to your compliance training to make it more effective. Listen in to get a baseline understanding of adult learning theory and Kristy's tips for enhancing your training program. We also hear about how Compliance Competitor is going and what the ever moving and shaking Kristy is up to next. Are you attending Compliance Week's annual conference? The GWIC team of Lisa, Tom and Mary will all be speaking and look forward to saying hello to Compliance Podcast Network listeners in DC. The Great Women in Compliance Podcast is on the Compliance Podcast Network with a selection of other Compliance-related offerings to listen in to. If you are enjoying this episode, please rate it on your preferred podcast player to help other like-minded Ethics and Compliance professionals find it. You can also find the GWIC podcast on Corporate Compliance Insights, where Lisa and Mary have a landing page with additional information about them and the story of the podcast. Corporate Compliance Insights is a much-appreciated sponsor and supporter of GWIC, including affiliate organization CCI Press publishing the related book, “Sending the Elevator Back Down, What We've Learned from Great Women in Compliance” (CCI Press, 2020). You can subscribe to the Great Women in Compliance podcast on any podcast player by searching for it, and we welcome new subscribers to our podcast. Join the Great Women in Compliance community on LinkedIn here.
Think back to the last time you learned something. Was there a concept or subject that just clicked for you? You understood it clearly, and it made sense. Now, think about a time at #school when you couldn't understand a concept or subject no matter how hard you tried. Well in today's video I dive into some learning theory to help you to understand how we #process information and how different ways of teaching and learning can impact how quickly and effectively you #learn. Learn more at virti.com/alex
Show Summary: In this episode of The STEM Space, Natasha and Claire discuss their excitement around Space Club: a STEM program designed to excite your students about real-world STEM and space exploration. Whether you use Space Club during school hours, after school, or even at your local library, tune in to this episode to learn about the benefits of participating in Space Club, how it relates to various learning theories, and how you can launch your students forward with a Mission to the Moon. Links from the Show: Bring Space Club to your school! If you are interested in a space-themed curriculum, check out our Space Club missions that connect students from around the world! In partnership with NASA, Vivify STEM runs semester-long space missions that include engineering design challenges, weekly career chats, a live leaderboard, career chats with astronauts and engineers, and a chance to win robots and telescopes! From afterschool clubs to STEM classes, teachers can implement Space Club either in-person or through distance learning. How to Teach Growth Mindset and Failing Forward. Learning Theories podcast episodes: Episode 13 on Behavioral Learning Theory; Episode 14 on Developmental Learning Theory; Episode 15 on Constructivist Learning Theory; Episode 16: Everything You Know About Learning Styles Is Wrong. Learning Theories Flow Chart in our free resource library (must first sign up for our free newsletter here). Not all STEM is equal (podcast Episode 1) (blog post) THE STEM SPACE SHOWNOTES: https://www.vivifystem.com/thestemspace/2021/37-the-learning-potential-of-space-club THE STEM SPACE FACEBOOK GROUP: https://www.facebook.com/groups/thestemspace/ VIVIFY INSTAGRAM: https://www.instagram.com/vivifystem VIVIFY FACEBOOK: https://www.facebook.com/vivifystem VIVIFY TWITTER: https://twitter.com/vivifystem
Buckle up: it's gonna be a bumpy ride as your tour guides, Katherine and Holland, guide you through some major theories of learning in the education field. Bring along a Whiteboard Marker-ita as we begin with behaviorism and end with critical theories. Maybe you'll recognize some oldies but goodies along the way (Vygotsky, we're looking at you)!