Podcasts about gradients

Multi-variable generalization of the derivative of a function
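As a quick illustration of that definition (an example added here, not from any episode below): for f(x, y) = x² + 3y the gradient is (∂f/∂x, ∂f/∂y) = (2x, 3), which can be checked numerically with central differences:

```python
def numerical_gradient(f, point, h=1e-6):
    """Approximate the gradient of f at `point` using central differences."""
    grad = []
    for i in range(len(point)):
        up = list(point)
        down = list(point)
        up[i] += h
        down[i] -= h
        grad.append((f(up) - f(down)) / (2 * h))
    return grad

# Illustrative function: f(x, y) = x^2 + 3y, analytic gradient (2x, 3).
f = lambda p: p[0] ** 2 + 3 * p[1]
print(numerical_gradient(f, [2.0, 1.0]))  # close to [4.0, 3.0]
```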

  • 113 podcasts
  • 274 episodes
  • 1h 12m avg. duration
  • 1 new episode monthly
  • Latest episode: May 21, 2025

POPULARITY (2017–2024)


Best podcasts about gradients

Latest podcast episodes about gradients

Brain Inspired
BI 212 John Beggs: Why Brains Seek the Edge of Chaos

May 21, 2025 · 93:34


Support the show to get full episodes, full archive, and join the Discord community. The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists. Read more about our partnership. Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.

You may have heard of the critical brain hypothesis. It goes something like this: brain activity operates near a dynamical regime called criticality, poised at the sweet spot between too much order and too much chaos. This is a good thing, because systems at criticality are optimized for computing: they maximize information transfer, they maximize the time range over which they operate, and they have a handful of other desirable properties. John Beggs has been studying criticality in brains for over 20 years. His 2003 paper with Dietmar Plenz is one of the first, if not the first, to show networks of neurons operating near criticality, and it gets cited in almost every criticality paper I read. John runs the Beggs Lab at Indiana University Bloomington, and a few years ago he literally wrote the book on criticality, The Cortex and the Critical Point: Understanding the Power of Emergence, which I highly recommend as an excellent introduction to the topic, and he continues to work on criticality today. On this episode we discuss what criticality is, why and how brains might strive for it, the past and present of how to measure it and why there isn't a consensus on how to measure it, and what it means that criticality appears in so many natural systems outside of brains yet we want to say it's a special property of brains. These days John spends plenty of effort defending the criticality hypothesis from critics, so we discuss that, and much more.

Beggs Lab
Book: The Cortex and the Critical Point: Understanding the Power of Emergence

Related papers:
Addressing skepticism of the critical brain hypothesis

Papers John mentioned:
Tetzlaff et al. 2010: Self-organized criticality in developing neuronal networks.
Haldeman and Beggs 2005: Critical Branching Captures Activity in Living Neural Networks and Maximizes the Number of Metastable States.
Bertschinger et al. 2004: At the edge of chaos: Real-time computations and self-organized criticality in recurrent neural networks.
Legenstein and Maass 2007: Edge of chaos and prediction of computational performance for neural circuit models.
Kinouchi and Copelli 2006: Optimal dynamical range of excitable networks at criticality.
Chialvo 2010: Emergent complex neural dynamics.
Mora and Bialek 2011: Are Biological Systems Poised at Criticality?

Read the transcript.

0:00 - Intro
4:28 - What is criticality?
10:19 - Why is criticality special in brains?
15:34 - Measuring criticality
24:28 - Dynamic range and criticality
28:28 - Criticisms of criticality
31:43 - Current state of critical brain hypothesis
33:34 - Causality and criticality
36:39 - Criticality as a homeostatic set point
38:49 - Is criticality necessary for life?
50:15 - Shooting for criticality far from thermodynamic equilibrium
52:45 - Quasi- and near-criticality
55:03 - Cortex vs. whole brain
58:50 - Structural criticality through development
1:01:09 - Criticality in AI
1:03:56 - Most pressing criticisms of criticality
1:10:08 - Gradients of criticality
1:22:30 - Homeostasis vs. criticality
1:29:57 - Minds and criticality
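The "sweet spot between too much order and too much chaos" can be illustrated with a toy branching process (a sketch added for illustration, not code from the episode): each active unit spawns a random number of descendants with mean branching ratio sigma. Subcritical activity (sigma < 1) dies out quickly, supercritical activity (sigma > 1) tends to explode, and sigma = 1 is the critical point, where avalanche sizes become heavy-tailed.

```python
import random

def avalanche_size(sigma, rng, cap=5000):
    """One avalanche of a toy branching process: each active unit independently
    spawns 0, 1, or 2 children, with mean branching ratio sigma (0 <= sigma <= 2).
    Returns the total number of activations, truncated at `cap` so that
    supercritical runs terminate."""
    p = sigma / 2.0  # probability of each of the two potential children
    active, total = 1, 1
    while active and total < cap:
        children = sum((rng.random() < p) + (rng.random() < p) for _ in range(active))
        active = children
        total += children
    return total

rng = random.Random(42)
for sigma in (0.5, 1.0, 1.5):
    sizes = [avalanche_size(sigma, rng) for _ in range(500)]
    # Subcritical activity dies out fast; supercritical runs hit the cap;
    # the critical point sigma = 1 yields heavy-tailed avalanche sizes.
    print(f"sigma={sigma}: mean avalanche size = {sum(sizes) / len(sizes):.1f}")
```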

Demystifying Science
The Electric Universe Inside You - Dr. Michael Hughes, DemystifySci #338

Apr 24, 2025 · 122:18


Michael Hughes is a postdoctoral researcher at St. Jude Children's Research Hospital who studies the overlooked role of water in living systems. His work builds on a growing body of research suggesting that water is not just a passive solvent, but a highly structured, information-rich medium. Hughes proposes that under normal biological conditions, water's ability to form liquid crystalline phases, hydration shells, and coherent domains allows it to act more like an information storage system than an inert backdrop to biochemistry. Drawing on ideas like EZ water, interfacial water dynamics, and liquid-liquid phase separation inside cells that span thinkers from Gilbert Ling to Gerald Pollack, Hughes argues that health emerges from the fine-tuned electrical and structural properties of intracellular water. When this water-protein-electrical system breaks down, disease can result. He outlines a new approach to the body that's rooted in biophysics, not just molecular biology, which he believes might offer novel ways to maintain health and slow aging by restoring the electromagnetic coherence of the body.

MAKE HISTORY WITH US THIS SUMMER: https://demystifysci.com/demysticon-2025
PATREON: https://www.patreon.com/c/demystifysci
PARADIGM DRIFT: https://demystifysci.com/paradigm-drift-show
Preprint of Michael's manuscript "Rethinking Cellular Organization: Phase Separation as a Unifying Principle in Molecular Biology": https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5171413
Dr. Thomas Seyfried podcast w/ DemystifySci: https://www.youtube.com/watch?v=rxHkXP3G3y4
"Live Streaming of a Single Cell's Life over a Local pH Monitoring Nanowire Waveguide": https://pubs.acs.org/doi/10.1021/acs.nanolett.2c02185
Rudolf Steiner's Agriculture Course: https://www.youtube.com/watch?v=fwSa8Lpy9-A

00:00 Go!
00:09:54 – Water, Fields & the Electric Body
00:15:01 – Biochemistry's Unifying Principles
00:21:26 – Water, Glutamine & Metabolism
00:23:02 – Liquid-Liquid Phase Separation
00:25:34 – Hydration & Cellular Structure
00:28:08 – Amyloids in Health & Disease
00:33:52 – Environment Shapes Amyloids
00:39:37 – Osmosis, Metabolism & Flow
00:41:04 – Soil Over Seed: Health Revolution
00:42:24 – Evolving Scientific Paradigms
00:46:08 – Cell Theory & Liquid Separation
00:50:34 – Rethinking Genetic Primacy
00:56:12 – Biochemical Research Challenges
01:01:58 – Terrain Theory & Post-Pandemic Trust
01:13:16 – Technology, Ethics & Evolution
01:16:06 – Metabolism as Societal Metaphor
01:21:09 – Lifespan, Healthspan & Food Systems
01:23:25 – Terrain Theory & Neurodegenerative Disease
01:25:10 – pH, Buffers & Biochemical Balance
01:30:03 – Cellular Function & pH Dynamics
01:35:17 – Biochemical Cell Environment
01:39:06 – Intracellular Phase Separation
01:43:07 – Insulin, Gradients & Phase Transitions
01:45:12 – Water, Food & Environmental Impacts
01:48:14 – Personal Diet & Exercise Design
01:57:09 – Experimenting with Your Health
02:00:11 – Dyno comp!

#electricuniverse #biochemistry #structuredwater #cellularhealth #watermemory #metabolism #quantumhealth #naturalmedicine #integrativemedicine #nutritionalscience #epigenetics #philosophypodcast #sciencepodcast #longformpodcast

ABOUT US: Anastasia completed her PhD studying bioelectricity at Columbia University. When not talking to brilliant people or making movies, she spends her time painting, reading, and guiding backcountry excursions. Shilo also did his PhD at Columbia studying the elastic properties of molecular water. When he's not in the film studio, he's exploring sound in music. They are both freelance professors at various universities.

SOCIAL:
- Discord: https://discord.gg/MJzKT8CQub
- Facebook: https://www.facebook.com/groups/DemystifySci
- Instagram: https://www.instagram.com/DemystifySci/
- Twitter: https://twitter.com/DemystifySci

MUSIC:
- Shilo Delay: https://g.co/kgs/oty671

On the Mark Golf Podcast
Will Stubbs On Zen Golf and Purpose-filled Practice

Apr 11, 2025 · 44:22


Will Stubbs (England) is the Managing Director of Zen Golf. He holds a BSc in Sports Development with Coaching and an MSc in Sports and Exercise Science. His Masters thesis research identified how golf coaching paradigms must change from technique-driven practice to adhere to the dynamic nature of the game, where the performer is interacting with the ever-changing environment. This makes him one of the foremost minds in Skill Acquisition coaching.

Based in England and established over 20 years ago, Zen is more than a ‘moving floor'. Their unique Adaptive Terrain Technology (ATT) instantly transforms the golf landscape and enjoyment of the game. Zen Golf gives you the world's most true-to-life, most immersive indoor golf experience by recreating the slopes you find on a real golf course: gradients of all kinds, on every shot and every putt, on the Zen Tour performance playing surfaces.

In this episode Will discusses:
- How Technique changes given Undulation and Slope changes
- How Biomechanics change off varying Slopes
- How Practice should include work off Uneven Lies
- The Skill of "reading" the game and the environment and making the requisite technique adjustments necessary for success
- Physical Skill Acquisition vs. Mental Skill Acquisition
- Building Confidence and Competence at the same time
- Using Uneven Lies to make Golf-swing Adjustments
- Using Zen Green Stages to Improve Green-reading
- Recreating Environments to Improve On-course Performance
- How to Create a completely Representative Learning Environment, incl.: Gapping Clubs and Figuring Distances off Uneven Lies; Putting on Slopes; Learning Skills over Technique; Solving Environmental Challenges; Learning the Skill of Adaptability

Will also describes the types of Zen Green Stages and gives a green-reading lesson on a Zen Putting Green. To watch this and see Will give a demonstration, search for and subscribe to Mark Immelman on YouTube.

Bittensor Guru
S2E8 - Subnet 56 Gradients.io w/ wanderingweights

Mar 28, 2025 · 68:05


The team at Rayon Labs have done it again with Gradients led by wanderingweights who joins the pod to discuss how he and his team have democratized AI model building with their "couple of clicks" no-code training subnet. This is one of the most groundbreaking projects on Bittensor that, in only a few months on the network, can already out-train the establishment. Think AutoML on Bittensor and you're on the right track but still selling this group way short. Enjoy! Video and links below.  https://x.com/KeithSingery/status/1905573818942263756 https://gradients.io https://github.com/rayonlabs/G.O.D https://rayonlabs.ai https://x.com/rayon_labs https://bittensor.guru

Fringe Radio Network
Bioelectricity I - Happy Fools Podcast

Feb 21, 2025 · 98:30


Bioelectricity refers to the electrical phenomena generated and utilized by living cells and tissues, encompassing processes like nerve impulses, muscle contractions, and cellular communication. It arises from the movement of ions across membranes, creating voltage gradients that play critical roles in physiology and development. Researchers are exploring how modulating these electrical signals can influence healing, regeneration, and even the formation of complex organ systems, shedding light on the remarkable interplay between biology and electricity.
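The membrane voltage gradients described here are commonly quantified with the Nernst equation, E = (RT/zF) · ln([ion]out / [ion]in). A minimal sketch (the potassium concentrations below are typical textbook values, assumed for illustration, not figures from the episode):

```python
import math

def nernst_potential_mv(z, conc_out, conc_in, temp_c=37.0):
    """Equilibrium (Nernst) potential in millivolts for an ion of valence z,
    given extracellular and intracellular concentrations in the same units."""
    R = 8.314    # gas constant, J/(mol*K)
    F = 96485.0  # Faraday constant, C/mol
    T = temp_c + 273.15
    return 1000.0 * (R * T) / (z * F) * math.log(conc_out / conc_in)

# Illustrative textbook values for potassium: ~5 mM outside, ~140 mM inside,
# giving an equilibrium potential near -90 mV at body temperature.
print(f"E_K = {nernst_potential_mv(z=+1, conc_out=5.0, conc_in=140.0):.1f} mV")
```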

“HR Heretics” | How CPOs, CHROs, Founders, and Boards Build High Performing Companies

Nolan and Kelli talk about AI's role in hiring processes, sparked by Anthropic's policy against using AI in job applications. They explore the tension between AI as a valuable tool and its potential for interview deception, ultimately advocating for a more integrated approach where candidates openly demonstrate their AI usage skills rather than attempting to restrict or hide it.

Email us your ‘Dear Heretics' questions: hrheretics@turpentine.co

For coaching and advising, inquire at https://kellidragovich.com/

HR Heretics is a podcast from Turpentine.

Support HR Heretics Sponsor: Metaview is the AI assistant for interviewing. Metaview completely removes the need for recruiters and hiring managers to take notes during interviews—because their AI is designed to take world-class interview notes for you. Team builders at companies like Brex, HelloFresh, and Quora say Metaview has changed the game—see the magic for yourself: https://www.metaview.ai/heretics

KEEP UP WITH NOLAN + KELLI ON LINKEDIN
Nolan: https://www.linkedin.com/in/nolan-church/
Kelli: https://www.linkedin.com/in/kellidragovich/

RECOMMENDATIONS FOR THIS PODCAST:
Gizmodo Article: “Anthropic Wants You to Use AI—Just Not to Apply for Its Jobs”: https://gizmodo.com/anthropic-wants-you-to-use-ai-just-not-to-apply-for-its-jobs-2000558490
Anthropic: https://www.anthropic.com/
Final Round AI: https://www.finalroundai.com/
Sora: https://openai.com/sora/

TIMESTAMPS:
(00:00) Intro
(01:26) Initial Reactions and Perspectives to Anthropic's Policy
(02:23) AI as a Tool vs. Replacement for Human Skills
(03:36) AI-Powered Interview Cheating
(04:11) Practical AI Usage Throughout the Application Process
(06:21) Sponsor: Metaview
(08:15) Authenticity vs. AI Assistance
(10:09) Polarities and Gradients of AI Usage
(11:16) The Problem of Cheating and the "Open Book" Interview Approach
(12:30) Using AI During In-Person Interviews
(13:33) Adapting to AI in Hiring and Education
(14:18) Audience Engagement and Future Solutions
(15:03) Wrap

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit hrheretics.substack.com

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Applications for the 2025 AI Engineer Summit are up, and you can save the date for AIE Singapore in April and AIE World's Fair 2025 in June.

Happy new year, and thanks for 100 great episodes! Please let us know what you want to see/hear for the next 100!

Full YouTube Episode with Slides/Charts. Like and subscribe and hit that bell to get notifs!

Timestamps

* 00:00 Welcome to the 100th Episode!
* 00:19 Reflecting on the Journey
* 00:47 AI Engineering: The Rise and Impact
* 03:15 Latent Space Live and AI Conferences
* 09:44 The Competitive AI Landscape
* 21:45 Synthetic Data and Future Trends
* 35:53 Creative Writing with AI
* 36:12 Legal and Ethical Issues in AI
* 38:18 The Data War: GPU Poor vs. GPU Rich
* 39:12 The Rise of GPU Ultra Rich
* 40:47 Emerging Trends in AI Models
* 45:31 The Multi-Modality War
* 01:05:31 The Future of AI Benchmarks
* 01:13:17 Pionote and Frontier Models
* 01:13:47 Niche Models and Base Models
* 01:14:30 State Space Models and RWKV
* 01:15:48 Inference Race and Price Wars
* 01:22:16 Major AI Themes of the Year
* 01:22:48 AI Rewind: January to March
* 01:26:42 AI Rewind: April to June
* 01:33:12 AI Rewind: July to September
* 01:34:59 AI Rewind: October to December
* 01:39:53 Year-End Reflections and Predictions

Transcript

[00:00:00] Welcome to the 100th Episode!

[00:00:00] Alessio: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx for the 100th time today.

[00:00:12] swyx: Yay, um, and we're so glad that, yeah, you know, everyone has, uh, followed us in this journey. How do you feel about it? 100 episodes.

[00:00:19] Alessio: Yeah, I know.

[00:00:19] Reflecting on the Journey

[00:00:19] Alessio: Almost two years that we've been doing this. We've had four different studios. Uh, we've had a lot of changes. You know, we used to do this lightning round when we first started that we didn't like, and we tried to change the question.
The answer

[00:00:32] swyx: was Cursor and Perplexity.

[00:00:34] Alessio: Yeah, I love Midjourney. It's like, do you really not like anything else?

[00:00:38] Alessio: Like what's, what's the unique thing? And I think, yeah, we, we've also had a lot more research-driven content. You know, we had like Tri Dao, we had, you know, Jeremy Howard, we had more folks like that.

[00:00:47] AI Engineering: The Rise and Impact

[00:00:47] Alessio: I think we want to do more of that too in the new year, like having, uh, some of the Gemini folks, both on the research and the applied side.

[00:00:54] Alessio: Yeah, but it's been a ton of fun. I think we both started, I wouldn't say as a joke, we were kind of like, oh, we [00:01:00] should do a podcast. And I think we kind of caught the right wave, obviously. And I think your Rise of the AI Engineer post just kind of gave people somewhere to congregate, and then the AI Engineer Summit.

[00:01:11] Alessio: And that's why when I look at our growth chart, it's kind of like a proxy for like the AI engineering industry as a whole, which is almost like, like, even if we don't do that much, we keep growing just because there's so many more AI engineers. So did you expect that growth, or did you expect that would take longer for like the AI engineer thing to kind of like become, you know, everybody talks about it today?
But there's enough people that have defined it, you know, GitHub, when [00:02:00] they launched GitHub Models, which is the Hugging Face clone, they put AI engineers in the banner, like, above the fold, like, in big. So I think it's like kind of arrived as a meaningful and useful definition.

[00:02:12] swyx: I think people are trying to figure out where the boundaries are. I think that was a lot of the quote unquote drama that happens behind the scenes at the World's Fair in June. Because I think there's a lot of doubt or questions about where ML engineering stops and AI engineering starts. That's a useful debate to be had.

[00:02:29] swyx: In some sense, I actually anticipated that as well. So I intentionally did not put a firm definition there, because most of the successful definitions are necessarily underspecified, and it's actually useful to have different perspectives, and you don't have to specify everything from the outset.

[00:02:45] Alessio: Yeah, I was at, um, AWS re:Invent, and the line to get into like the AI engineering talk, so to speak, which is, you know, applied AI and whatnot, was like, there are like hundreds of people just in line to go in.

[00:02:56] Alessio: I think that's kind of what enabled me. People, right? Which is what [00:03:00] you kind of talked about. It's like, hey, look, you don't actually need a PhD, just, yeah, just use the model. And then maybe we'll talk about some of the blind spots that you get as an engineer with the earlier posts that we also had on the Substack.

[00:03:11] Alessio: But yeah, it's been a heck of a heck of a two years.

[00:03:14] swyx: Yeah.
And that's as it should be, because ML is very much in a research phase.

[00:03:34] swyx: But as we move this entire field into production, I think that ratio inverts into becoming more engineering heavy. So at least I think engineering should be on the same level, even if it's never as prestigious, like it'll always be low status because at the end of the day, you're manipulating APIs or whatever.

[00:03:51] swyx: But yeah, wrapping GPTs, but there's going to be an increasing stack and an art to doing these, these things well. And I, you know, I [00:04:00] think that's what we're focusing on for the podcast, the conference, and basically everything I do seems to make sense. And I think we'll, we'll talk about the trends here that apply.

[00:04:09] swyx: It's, it's just very strange. So, like, there's a mix of, like, keeping on top of research while not being a researcher and then putting that research into production. So, like, people always ask me, like, why are you covering NeurIPS? Like, this is a ML research conference, and I'm like, well, yeah, I mean, we're not going to, to like, understand everything or reproduce every single paper, but the stuff that is being found here is going to make it through into production at some point, you hope.

[00:04:32] swyx: And then actually like when I talk to the researchers, they actually get very excited, because they're like, oh, you guys are actually caring about how this goes into production, and that's what they really, really want. The measure of success is previously just peer review, right? Getting 7s and 8s on their, um, academic review conferences and stuff. Like, citations is one metric, but money is a better metric.

[00:04:51] Alessio: Money is a better metric. Yeah, and there were about 2200 people on the live stream or something like that. Yeah, yeah. Hundred on the live stream. So [00:05:00] I try my best to moderate, but it was a lot spicier in person with Jonathan and, and Dylan.
Yeah, that it was in the chat on YouTube.

[00:05:06] swyx: I would say that I actually also created

[00:05:09] swyx: Latent Space Live in order to address flaws that are perceived in academic conferences. This is not NeurIPS specific, it's ICML, NeurIPS. Basically, it's very sort of oriented towards the PhD student, uh, market, job market, right? Like literally all, basically everyone's there to advertise their research and skills and get jobs.

[00:05:28] swyx: And then obviously all the, the companies go there to hire them. And I think that's great for the individual researchers, but for people going there to get info is not great, because you have to read between the lines, bring a ton of context in order to understand every single paper. So what is missing is effectively what I ended up doing, which is domain by domain, go through and recap the best of the year.

[00:05:48] swyx: Survey the field. And there are, like NeurIPS had a, uh, I think ICML had a like a position paper track, NeurIPS added a benchmarks, uh, datasets track. These are ways in which to address that [00:06:00] issue. Uh, there's always workshops as well. Every, every conference has, you know, a last day of workshops and stuff that provide more of an overview.

[00:06:06] swyx: But they're not specifically prompted to do so. And I think really, uh, organizing a conference is just about getting good speakers and giving them the correct prompts. And then they will just go and do that thing and they do a very good job of it. So I think Sarah did a fantastic job with the startups prompt.

[00:06:21] swyx: I can't list everybody, but we did best of 2024 in startups, vision, open models, post-transformers, synthetic data, small models, and agents. And then the last one was the, uh, and then we also did a quick one on reasoning with Nathan Lambert. And then the last one, obviously, was the debate that people were very hyped about.

[00:06:39] swyx: It was very awkward.
And I'm really, really thankful for Jonathan Frankle, basically, who stepped up to challenge Dylan. Because Dylan was like, yeah, I'll do it. But he was pro scaling. And I think everyone who is like in AI is pro scaling, right? So you need somebody who's ready to publicly say, no, we've hit a wall.

[00:06:57] swyx: So that means you're saying Sam Altman's wrong. [00:07:00] You're saying, um, you know, everyone else is wrong. It helps that this was the day before Ilya went on, went up on stage and then said pre-training has hit a wall. And data has hit a wall. So actually Jonathan ended up winning, and then Ilya supported that statement, and then Noam Brown on the last day further supported that statement as well.

[00:07:17] swyx: So it's kind of interesting that I think the consensus kind of going in was that we're not done scaling, like you should believe in the bitter lesson. And then, four straight days in a row, you had Sepp Hochreiter, who is the creator of the LSTM, along with everyone's favorite OG in AI, which is Juergen Schmidhuber.

[00:07:34] swyx: He said that, um, we're pre-training into a wall, or like, we've run into a different kind of wall. And then we have, you know, Jonathan Frankle, Ilya, and then Noam Brown all saying variations of the same thing, that we have hit some kind of wall in the status quo of what pre-trained, scaling large pre-trained models has looked like, and we need a new thing.

[00:07:54] swyx: And obviously the new thing for people is, some people are calling it inference time compute or test time [00:08:00] compute. I think the collective terminology has been inference time, and I think that makes sense, because test time, calling it test, meaning, has a very pre-trained bias, meaning that the only reason for running inference at all is to test your model.

[00:08:11] swyx: That is not true. Right. Yeah. So, so, I quite agree that.
OpenAI seems to have adopted, or the community seems to have adopted, this terminology of ITC instead of TTC. And that, that makes a lot of sense, because like now we care about inference, even right down to compute optimality. Like I actually interviewed this author who recovered or reviewed the Chinchilla paper.

[00:08:31] swyx: The Chinchilla paper is compute-optimal training, but what is not stated in there is it's pre-trained compute-optimal training. And once you start caring about inference compute-optimal training, you have a different scaling law. And in a way that we did not know last year.

[00:08:45] Alessio: I wonder, because John is, he's also on the side of attention is all you need.

[00:08:49] Alessio: Like he had the bet with Sasha. So I'm curious, like he doesn't believe in scaling, but he thinks the transformer, I wonder if he's still. So, so,

[00:08:56] swyx: so he, obviously everything is nuanced, and you know, I told him to play a character [00:09:00] for this debate, right? So he actually does. Yeah. He still, he still believes that we can scale more.

[00:09:04] swyx: Uh, he just assumed the character to be very game for, for playing this debate. So even more kudos to him that he assumed a position that he didn't believe in and still won the debate.

[00:09:16] Alessio: Get rekt, Dylan. Um, do you just want to quickly run through some of these things? Like, uh, Sarah's presentation, just the highlights.

[00:09:24] swyx: Yeah, we can't go through everyone's slides, but I pulled out some things as a factor of, like, stuff that we were going to talk about. And we'll

[00:09:30] Alessio: publish

[00:09:31] swyx: the rest. Yeah, we'll publish on this feed the best of 2024 in those domains. And hopefully people can benefit from the work that our speakers have done.

[00:09:39] swyx: But I think it's, uh, these are just good slides.
And I've been, I've been looking for a sort of end of year recaps from, from people.

[00:09:44] The Competitive AI Landscape

[00:09:44] swyx: The field has progressed a lot. You know, I think the max ELO in 2023 on LMSys used to be 1200 for LMSys ELOs. And now everyone is at least at, uh, 1275 in their ELOs, and this is across Gemini, ChatGPT, [00:10:00] Grok, 01.

[00:10:01] swyx: ai, with their Yi-Large model, and Anthropic, of course. It's a very, very competitive race. There are multiple frontier labs all racing, but there is a clear tier zero frontier. And then there's like a tier one. It's like, I wish I had everything else. Tier zero is extremely competitive. It's effectively now a three horse race between Gemini, uh, Anthropic and OpenAI.

[00:10:21] swyx: I would say that people are still holding out a candle for xAI. xAI, I think, for some reason, because their API was very slow to roll out, is not included in these metrics. So it's actually quite hard to put on there. As someone who also does charts, xAI is continually snubbed because they don't work well with the benchmarking people.

[00:10:42] swyx: Yeah, yeah, yeah. It's a little trivia for why xAI always gets ignored. The other thing is market share. So these are slides from Sarah. We have it up on the screen. It has gone from very heavily OpenAI, so we have some numbers and estimates. These are from Ramp, estimates of OpenAI market share in [00:11:00] December 2023.

[00:11:01] swyx: And this is basically, what is it, GPT being 95 percent of production traffic. And I think if you correlate that with stuff that we asked Harrison Chase on the LangChain episode, it was true. And then Claude 3 launched mid middle of this year. I think Claude 3 launched in March, Claude 3.5 Sonnet was in June-ish.

[00:11:23] swyx: And you can start seeing the market share shift towards, uh, towards Anthropic, uh, very, very aggressively. The more recent one is Gemini.
So if I scroll down a little bit, this is an even more recent dataset. So Ramp's dataset ends in September 2024. Gemini has basically launched a price war at the low end, uh, with Gemini Flash, uh, being basically free for personal use.

[00:11:44] swyx: Like, I think people don't understand the free tier. It's something like a billion tokens per day. Unless you're trying to abuse it, you cannot really exhaust your free tier on Gemini. They're really trying to get you to use it. They know they're in like third place, um, fourth place, depending how you, how you count.

[00:11:58] swyx: And so they're going after [00:12:00] the lower tier first, and then, you know, maybe the upper tier later, but yeah, Gemini Flash, according to OpenRouter, is now 50 percent of their OpenRouter requests. Obviously, these are the small requests. These are small, cheap requests that are mathematically going to be more.

[00:12:15] swyx: The smart ones obviously are still going to OpenAI. But, you know, it's a very, very big shift in the market. Like basically 2023, 2022, to going into 2024, OpenAI has gone from 95 percent market share to, yeah, reasonably somewhere between 50 to 75 percent market share.

[00:12:29] Alessio: Yeah. I'm really curious how Ramp does the attribution to the model.

[00:12:32] Alessio: If it's API, because I think it's all credit card spend. Well, but it's all, the credit card doesn't say, maybe. Maybe the, maybe when they do expenses, they upload the PDF, but yeah, the, the Gemini one I think makes sense. I think that was one of my main 2024 takeaways, that like the best small model companies are the large labs, which is not something I would have thought, that the open source kind of like long tail would be like the small model.

[00:12:53] swyx: Yeah, different sizes of small models we're talking about here, right? Like so small model here for Gemini is 8B, [00:13:00] right? Uh, mini.
We don't know what the small model size is, but yeah, it's probably in the double digits or maybe single digits, but probably double digits. The open source community has kind of focused on the one to three B size.

[00:13:11] swyx: Mm-hmm. Yeah. Maybe

[00:13:12] swyx: zero, maybe 0.5B, uh, that's Moondream, and that is small for you, then, then that's great. It makes sense that we, we have a range for small now, which is like, may, maybe one to five B. Yeah. I'll even put that at, at, at the high end. And so this includes Gemma from Gemini as well. But also includes the Apple Foundation models, which I think Apple Foundation is 3B.

[00:13:32] Alessio: Yeah. No, that's great. I mean, I think at the start, small just meant cheap. I think today small is actually a more nuanced discussion, you know, that people weren't really having before.

[00:13:43] swyx: Yeah, we can keep going. This is a slide that I slightly disagree with Sarah on. She's pointing to the Scale SEAL leaderboard. I think the researchers that I talked with at NeurIPS were kind of positive on this, because basically you need private test [00:14:00] sets to prevent contamination.

[00:14:02] swyx: And Scale is one of maybe three or four people this year that has really made an effort in doing a credible private test set leaderboard. Llama 405B does well compared to Gemini and GPT-4o. And I think that's good. I would say that, you know, it's good to have an open model that is that big, that does well on those metrics.

[00:14:23] swyx: But anyone putting 405B in production will tell you, if you scroll down a little bit to the Artificial Analysis numbers, that it is very slow and very expensive to infer. Um, it doesn't even fit on like one node of, uh, of H100s. Cerebras will be happy to tell you they can serve 405B on their super large chips.

[00:14:42] swyx: But, um, you know, if you need to do anything custom to it, you're still kind of constrained. So, is 405B really that relevant?
Like, I think most people are basically saying that they only use 405B as a teacher model to distill down to something. Even Meta is doing it. So when Llama 3.3 [00:15:00] launched, they only launched the 70B because they used 405B to distill the 70B.[00:15:03] swyx: So I don't know if open source is keeping up. I think the open source industrial complex is very invested in telling you that the gap is narrowing. I kind of disagree. I think that the gap is widening with o1. I think there are very, very smart people trying to narrow that gap and they should.[00:15:22] swyx: I really wish them success, but you cannot use a benchmark that is nearing 100 percent saturation in your chart and go, look, the distance between open source and closed source is narrowing. Of course it's going to narrow because you're near 100. This is stupid. But in metrics that matter, is open source narrowing?[00:15:38] swyx: Probably not for o1 for a while. And it's really up to the open source guys to figure out if they can match o1 or not.[00:15:46] Alessio: I think inference time compute is bad for open source just because, you know, Zuck can donate the flops at training time, but he cannot donate the flops at inference time. So it's really hard to actually keep up on that axis.[00:15:59] Alessio: Big, big business [00:16:00] model shift. So I don't know what that means for the GPU clouds. I don't know what that means for the hyperscalers, but obviously the big labs have a lot of advantage. Because, like, it's not a static artifact that you're putting the compute into. You're kind of doing that still, but then you're putting a lot of compute at inference too.[00:16:17] swyx: Yeah, yeah, yeah. Um, I mean, Llama 4 will be reasoning oriented. We talked with Thomas Scialom. Um, kudos for getting that episode together. That was really nice. Good, well timed. Actually, I connected with the Meta AI guy, uh, at NeurIPS, and, um, yeah, we're going to coordinate something for Llama 4.
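Going back to the 405B-as-teacher distillation being described: in the classic Hinton-style setup, a student is trained against the teacher's softened output distribution. Here is a minimal sketch in plain Python with made-up two-class logits, an assumed illustration of the general technique, not anything from Meta's actual Llama 3.3 recipe:

```python
import math

def softmax(logits, temperature=1.0):
    # temperature > 1 softens the distribution, exposing the teacher's
    # "dark knowledge" about relative probabilities of wrong answers
    exps = [l_i and math.exp(l_i / temperature) or math.exp(0.0) for l_i in logits]
    exps = [math.exp(l_i / temperature) for l_i in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Cross-entropy of the student against the teacher's soft targets,
    # scaled by T^2 so gradient magnitudes stay comparable (Hinton-style KD).
    teacher_probs = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    return -sum(p * math.log(q)
                for p, q in zip(teacher_probs, student_probs)) * temperature ** 2

# A student that agrees with the teacher incurs a lower loss than one that disagrees.
agree = distillation_loss([2.0, 0.0], [2.0, 0.0])
disagree = distillation_loss([0.0, 2.0], [2.0, 0.0])
```

In practice this soft-target term is usually mixed with ordinary cross-entropy on hard labels, and for LLMs the even cheaper variant is simply fine-tuning the student on teacher-generated text.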
Yeah, yeah,[00:16:32] Alessio: and our friend, yeah.[00:16:33] Alessio: Clara Shih just joined to lead the business agent side. So I'm sure we'll have her on in the new year.[00:16:39] swyx: Yeah. So, um, my comment on the business model shift, this is super interesting. Apparently it is wide knowledge that OpenAI wanted more than $6.6 billion for their fundraise. They wanted to raise higher, and they did not.[00:16:51] swyx: And what that means is basically like, it's very convenient that we're not getting GPT-5, which would have been a larger pretrain needing a lot of upfront money. And [00:17:00] instead we're converting fixed costs into variable costs, right, and passing it on effectively to the customer. And it's so much easier to take margin there because you can directly attribute it to like, oh, you're using this more.[00:17:12] swyx: Therefore you pay more of the cost and I'll just slap a margin in there. So like that lets you control your gross margin and tie your spend, or your sort of inference spend, accordingly. And it's just really interesting that this change in the sort of inference paradigm has arrived exactly at the same time that the funding environment for pre-training is effectively drying up, kind of.[00:17:36] swyx: I feel like maybe the VCs are very in tune with research anyway, so like, they would have noticed this, but, um, it's just interesting.[00:17:43] Alessio: Yeah, and I was looking back at our yearly recap of last year. Yeah. And the big thing was like the Mixtral price fights, you know, and I think now it's almost like there's nowhere to go, like, you know, Gemini Flash is basically giving it away for free.[00:17:55] Alessio: So I think this is a good way for the labs to generate more revenue and pass down [00:18:00] some of the compute to the customer. I think they're going to[00:18:02] swyx: keep going. I think the $2,000 tier will come.[00:18:05] Alessio: Yeah, I know.
Totally. I mean, next year, the first thing I'm doing is signing up for Devin. Signing up for the Pro ChatGPT.[00:18:12] Alessio: Just to try. I just want to see, what does it look like to spend a thousand dollars a month on AI?[00:18:17] swyx: Yes. Yes. I think if your job is at least AI content creator or VC or, you know, someone whose job it is to stay on top of things, you should already be spending like a thousand dollars a month on stuff.[00:18:28] swyx: And then obviously easy to spend, hard to use. You have to actually use it. The good thing is that actually Google lets you do a lot of stuff for free now. So like Deep Research, that they just launched, uses a ton of inference and it's free while it's in preview.[00:18:45] Alessio: Yeah. They need to put that in Lindy.[00:18:47] Alessio: I've been using Lindy lately. I've built a bunch of things since we had Flo on, because I liked the new thing. It's pretty good. I even did a phone call assistant. Um, yeah, they just launched Lindy Voice. Yeah, I think once [00:19:00] they get Advanced Voice Mode-like capability. Today, still like speech to text, you can kind of tell.[00:19:06] Alessio: Um, but it's good for like reservations and things like that. So I have a meeting prepper thing. And so[00:19:13] swyx: it's good. Okay. I feel like we've covered a lot of stuff. Uh, I, yeah, you know, I think we will go over the individual talks in a separate episode. Uh, I don't want to take too much time with this stuff, but suffice to say that there is a lot of progress in each field.[00:19:28] swyx: Uh, we covered vision. Basically this is all like the audience voting for what they wanted. And then I just invited the best people I could find in each area, especially agents. Um, Graham, who I talked to at ICML in Vienna, he is currently still number one.
It's very hard to stay on top of SWE-Bench.[00:19:45] swyx: OpenHands is currently still number one on SWE-Bench Full, which is the hardest one. He had very good thoughts on agents, which I'll highlight for people. Everyone is saying 2025 is the year of agents, just like they said last year. And, uh, but he had [00:20:00] thoughts on like eight parts of what are the frontier problems to solve in agents.[00:20:03] swyx: And so I'll highlight that talk as well.[00:20:05] Alessio: Yeah. The number six, which is having agents learn more about the environment, has been super interesting to us as well, just to think through. Because, yeah, how do you put an agent in an enterprise where most things in an enterprise have never been public, you know, a lot of the tooling, like the code bases and things like that.[00:20:23] Alessio: So, yeah, there's no indexing and RAG. Well, yeah, but it's more like, you can't really RAG things that are not documented. But people know them based on how they've been doing it, you know. So I think there's almost this like, you know, institutional knowledge. Yeah, the boring word for it is kind of like business process extraction.[00:20:38] Alessio: Yeah yeah, I see. It's like, how do you actually understand how these things are done? I see. Um, and I think today the problem is that the agents most people are building are good at following instructions, but are not as good at extracting them from you. Um, so I think that will be a big unlock. Just to touch quickly on the Jeff Dean thing,[00:20:55] Alessio: I thought it was pretty, I mean, we'll link it in the show notes, but I think the main [00:21:00] focus was like, how do you use ML to optimize the systems instead of just focusing on ML to do something else? Yeah, I think speculative decoding, we had, you know, Eugene from RWKV on the podcast before, like he's doing a lot of that with Featherless AI.[00:21:12] swyx: Everyone is.
I would say it's the norm. I'm a little bit uncomfortable with how much it costs, because it does use more of the GPU per call. But because everyone is so keen on fast inference, then yeah, it makes sense.[00:21:24] Alessio: Exactly. Um, yeah, but we'll link that. Obviously Jeff is great.[00:21:30] swyx: Jeff is, Jeff's talk was more, it wasn't focused on Gemini.[00:21:33] swyx: I think people got the wrong impression from my tweet. It's more about how Google approaches ML and uses ML to design systems and then systems feed back into ML. And I think this ties in with Loubna's talk[00:21:45] Synthetic Data and Future Trends[00:21:45] swyx: on synthetic data, where it's basically the story of bootstrapping of humans and AI in AI research or AI in production.[00:21:53] swyx: So her talk was on synthetic data, where like how much synthetic data has grown in 2024 on the pre-training side, the post-training side, [00:22:00] and the eval side. And I think Jeff then also extended it basically to chips, uh, to chip design. So he spent a lot of time talking about AlphaChip. And most of us in the audience are like, we're not working on hardware, man.[00:22:11] swyx: Like you guys are great. TPU is great. Okay. We'll buy TPUs.[00:22:14] Alessio: And then there was the earlier talk. Yeah. But, and then we have, uh, I don't know if we're calling them essays. What are we calling these? But[00:22:23] swyx: for me, it's just like bonus for Latent Space supporters, because I feel like they haven't been getting anything.[00:22:29] swyx: And then I wanted a more high frequency way to write stuff. Like that one I wrote in an afternoon. I think basically we now have an answer to what Ilya saw. It's one year since the blip. And we know what he saw in 2014. We know what he saw in 2024. We think we know what he sees now.
He gave some hints and then we have vague indications of what he saw in 2023.[00:22:54] swyx: So that was the, oh, and then 2016 as well, because of this lawsuit with Elon, OpenAI [00:23:00] is publishing emails from Sam, like his personal text messages to Shivon Zilis, or whatever. So, like, we have emails from Ilya saying, this is what we're seeing in OpenAI, and this is why we need to scale up GPUs. And I think it's very prescient in 2016 to write that.[00:23:16] swyx: And so, like, it is exactly, like, basically his insights. It's him and Greg, basically just kind of driving the scaling up of OpenAI, while they're still playing Dota. They're like, no, like, we see the path here.[00:23:30] Alessio: Yeah, and it's funny, yeah, they even mention, you know, we can only train on 1v1 Dota. We need to train on 5v5, and that takes too many GPUs.[00:23:37] Alessio: Yeah,[00:23:37] swyx: and at least for me, I can speak for myself, like, I didn't see the path from Dota to where we are today. I think even, maybe if you ask them, like, they wouldn't necessarily draw a straight line. Yeah,[00:23:47] Alessio: no, definitely. But I think like that was like the whole idea of almost like the RL, and we talked about this with Nathan on his podcast.[00:23:55] Alessio: It's like with RL, you can get very good at specific things, but then you can't really generalize as much. And I [00:24:00] think the language models are like the opposite, which is like, you're going to throw all this data at them and scale them up, but then you really need to drive them home on a specific task later on.[00:24:08] Alessio: And we'll talk about the OpenAI Reinforcement Fine-Tuning announcement too, and all of that. But yeah, I think like scale is all you need. That's kind of what Ilya will be remembered for. And I think just maybe to clarify on like the pre-training is over thing that people love to tweet.
I think the point of the talk was like, everybody, we're scaling these chips, we're scaling the compute, but the second ingredient, which is data, is not scaling at the same rate.[00:24:35] Alessio: So it's not necessarily pre-training is over. It's kind of like, what got us here won't get us there. In his email, he predicted like 10x growth every two years or something like that. And I think maybe now it's like, you know, you can 10x the chips again, but[00:24:49] swyx: I think it's 10x per year. Was it? I don't know.[00:24:52] Alessio: Exactly. And Moore's Law is like 2x. So it's like, you know, much faster than that. And yeah, I like the fossil fuel of AI [00:25:00] analogy. It's kind of like, you know, the little background tokens thing. So the OpenAI Reinforcement Fine-Tuning is basically like, instead of fine-tuning on data, you fine-tune on a reward model.[00:25:09] Alessio: So it's basically like, instead of being data driven, it's task driven. And I think people have tasks to do, they don't really have a lot of data. So I'm curious to see how that changes how many people fine-tune, because I think this is what people run into. It's like, oh, you can fine-tune Llama. And it's like, okay, where do I get the data[00:25:27] Alessio: to fine-tune it on, you know? So it's great that we're moving the thing. And then I really liked that he had this chart where like, you know, the brain mass and the body mass thing, it's basically like mammals scale linearly by brain and body size, and then humans kind of broke off the slope. So it's almost like maybe the mammal slope is the pre-training slope,[00:25:46] Alessio: and then the post-training slope is the human one.[00:25:49] swyx: Yeah. I wonder what the, I mean, we'll know in 10 years, but I wonder what the y-axis is for Ilya's SSI. We'll try to get them on.[00:25:57] Alessio: Ilya, if you're listening, you're [00:26:00] welcome here.
Yeah, and then he had, you know, what comes next: agents, synthetic data, inference compute. I thought all of that was expected.[00:26:05] Alessio: I don't[00:26:05] swyx: think he was dropping any alpha there. Yeah, yeah, yeah.[00:26:07] Alessio: Yeah. Any other NeurIPS highlights?[00:26:10] swyx: I think that there was comparatively a lot more work. Oh, by the way, I need to plug that, uh, my friend Yi made this, like, nice little paper list. Yeah, that was really[00:26:20] swyx: nice.[00:26:20] swyx: Uh, she called it must-read papers of 2024.[00:26:26] swyx: So I laid out some of these at NeurIPS, and it was just gone. Like, everyone just picked it up. Because people are dying for, like, little guidance and visualizations. And so, uh, I thought it was really super nice that we got there.[00:26:38] Alessio: Should we do a Latent Space book for each year? Uh, I thought about it. For each year we should.[00:26:42] Alessio: Coffee table book. Yeah. Yeah. Okay. Put it in the will. Hi, Will. By the way, we haven't introduced you. He's our new, you know, our Jamie. You need to[00:26:52] swyx: pull up more things. One thing I saw that, uh, okay, one fun one, and then one [00:27:00] more general one. So the fun one is this paper on agent collusion. This is a paper on steganography.[00:27:06] swyx: This is secret collusion among AI agents, multi-agent deception via steganography. I tried to go to NeurIPS in order to find these kinds of papers, because the real reason, like, NeurIPS this year has a lottery system. A lot of people actually even go and don't buy tickets, because they just go and attend the side events.[00:27:22] swyx: And then also the people who go end up crowding around the most popular papers, which you already know and already read before you showed up to NeurIPS. So the only reason you go there is to talk to the paper authors, but there's like something like 10,000 others.
All these papers out there that, you know, are just people's work that they did over the year and failed to get attention for one reason or another.[00:27:42] swyx: And this was one of them. Uh, it was like all the way at the back. And this is a DeepMind paper that actually focuses on collusion between AI agents, uh, by hiding messages in the text that they generate. Uh, so that's what steganography is. So a very simple example would be the first letter of every word.[00:27:57] swyx: If you pick that out, you know, the code sends a [00:28:00] different message than the surface text. But something I've always emphasized is, to LLMs, we read left to right. LLMs can read up, down, sideways, you know, in random character order, and it's the same to them as it is to us. So if we were ever to get, you know, self-motivated, unaligned LLMs that are trying to collaborate to take over the planet,[00:28:19] swyx: this would be how they do it. They spread messages among us in the messages that we generate. And he developed a scaling law for that. So he marked, I'm showing it on screen right now, the emergence of this phenomenon. Basically, for example, for cipher encoding, GPT-2, Llama 2, Mixtral, GPT-3.5 show zero capability, and then suddenly GPT-4 can do it.[00:28:40] swyx: And this is the kind of Jason Wei type emergence property that people look for. I think what made this paper stand out as well is that he developed a benchmark for steganography collusion, and he also focused on Schelling point collusion, which is very low coordination. For agreeing on an encoding and decoding format, you kind of need to have some [00:29:00] agreement on that.[00:29:00] swyx: But Schelling point means very, very low or almost no coordination. So for example, if the only message I give you is "meet me in New York" and you're not aware of where or when, you would probably meet me at Grand Central Station.
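The first-letter scheme swyx describes can be sketched in a few lines. This is an assumed toy illustration of acrostic steganography, not code from the DeepMind paper:

```python
# Toy acrostic steganography: the secret rides in the first letter of each
# word of an innocuous-looking cover text. The word table is invented for
# this example; a real colluding agent would sample fluent text instead.
COVER_WORDS = {"h": "hello", "i": "indeed", "t": "thanks"}

def encode(secret: str) -> str:
    # Build a cover text whose word initials spell the secret.
    return " ".join(COVER_WORDS[ch] for ch in secret)

def decode(cover_text: str) -> str:
    # Read back the first letter of every word.
    return "".join(word[0] for word in cover_text.split())

cover = encode("hi")      # "hello indeed"
recovered = decode(cover)  # "hi"
```

A colluding LLM would do the encoding implicitly, generating natural-sounding text subject to the initial-letter constraint rather than using a lookup table, which is what makes the channel hard to spot.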
That is, Grand Central Station is a Schelling point.[00:29:16] swyx: And it's probably sometime during the day. The Schelling point of New York is Grand Central. To that extent, Schelling points for steganography are things like the common decoding methods that we talked about. It will be interesting at some point in the future when we are worried about alignment.[00:29:30] swyx: It is not interesting today, but it's interesting that DeepMind is already thinking about this.[00:29:36] Alessio: I think that's like one of the hardest things about NeurIPS. It's like the long tail. I[00:29:41] swyx: found a pricing guy. I'm going to feature him on the podcast. Basically, this guy from NVIDIA worked out the optimal pricing for language models.[00:29:51] swyx: It's basically an econometrics paper at NeurIPS, where everyone else is talking about GPUs. And the guy with the GPUs is[00:29:57] Alessio: talking[00:29:57] swyx: about economics instead. [00:30:00] That was the sort of fun one. So the focus I saw is that model papers at NeurIPS are kind of dead. No one really presents models anymore. It's just datasets.[00:30:12] swyx: This is what all the grad students are working on. So like there was a datasets track, and then I was looking around like, you don't need a datasets track, because every paper is a datasets paper. And so datasets and benchmarks, they're kind of flip sides of the same thing. So yeah. Cool. Yeah, if you're a grad student, you're GPU poor, you kind of work on that.[00:30:30] swyx: And then the sort of big model labs walk around and pick the ones that they like, and then they use them in their models. And that's kind of how it develops. I feel like, um, last year you had people like Haotian Liu, who worked on LLaVA, which is take Llama and add vision.[00:30:47] swyx: And then obviously actually I hired him and he added vision to Grok. Now he's the vision Grok guy.
This year, I don't think there was any of those.[00:30:55] Alessio: What were the most popular, like, orals? Last year it was like the [00:31:00] Monarch Mixer, I think, was like the most attended. Yeah, uh, I need to look it up. Yeah, I mean, if nothing comes to mind, that's also kind of like an answer in a way.[00:31:10] Alessio: But I think last year there was a lot of interest in, like, furthering models and, like, different architectures and all of that.[00:31:16] swyx: I will say that I felt the oral picks this year were not very good. Either that or maybe it's just me. So that's the highlight of how I have changed in terms of how I view papers.[00:31:29] swyx: So like, in my estimation, two of the best papers this year for datasets are DataComp and RefinedWeb or FineWeb. These are two actually industrially used papers, not highlighted for awards. I think DCLM got the spotlight, FineWeb didn't even get the spotlight. So like, it's just that the picks were different.[00:31:48] swyx: But one thing that does get a lot of play, that a lot of people are debating, is the role of schedules. This is the Schedule-Free optimizer paper from Meta, from Aaron Defazio. And this [00:32:00] year in the ML community, there's been a lot of chat about Shampoo, SOAP, all the bathroom amenities for optimizing your learning rates.[00:32:08] swyx: And, uh, most people at the big labs who I asked about this, um, say that it's cute, but it's not something that matters. I don't know, but it's something that was discussed and very, very popular. 4 Wars[00:32:19] Alessio: of AI recap maybe, just quickly. Um, where do you want to start? Data?[00:32:26] swyx: So to remind people, this is the 4 Wars piece that we did as one of our earlier recaps of this year.[00:32:31] swyx: And the belligerents are on the left: journalists, writers, artists, anyone who owns IP basically. New York Times, Stack Overflow, Reddit, Getty, Sarah Silverman, George R.R. Martin.
Yeah, and I think this year we can add Scarlett Johansson to that side of the fence. So anyone suing OpenAI, basically. I actually wanted to get a snapshot of all the lawsuits.[00:32:52] swyx: I'm sure some lawyer can do it. That's the data quality war. On the right hand side, we have the synthetic data people, and I think we talked about Loubna's talk, you know, [00:33:00] really showing how much synthetic data has come along this year. I think there was a bit of a fight between Scale AI and the synthetic data community, because Scale[00:33:09] swyx: AI published a paper saying that synthetic data doesn't work. Surprise, surprise, Scale AI is the leading vendor of non-synthetic data. Only[00:33:17] Alessio: cage-free annotated data is useful.[00:33:21] swyx: So I think there's some debate going on there, but I don't think it's much debate anymore that at least synthetic data, for the reasons that were laid out in Loubna's talk, makes sense.[00:33:32] swyx: I don't know if you have any perspectives there.[00:33:34] Alessio: I think, again, going back to the Reinforcement Fine-Tuning, I think that will change a little bit how people think about it. I think today people mostly use synthetic data, yeah, for distillation and kind of like fine-tuning a smaller model from a larger model.[00:33:46] Alessio: I'm not super aware of how the frontier labs use it outside of like the Rephrasing the Web thing that Apple also did. But yeah, I think it'll be useful.
I think like whether or not that gets us the big [00:34:00] next step, I think that's maybe like TBD, you know. I think people love talking about data because it's a GPU poor thing, you know. I think, uh, synthetic data is something that people can do, you know, so they feel more opinionated about it compared to, yeah, the optimizer stuff, which is like,[00:34:17] swyx: they don't[00:34:17] Alessio: really work[00:34:18] swyx: on.[00:34:18] swyx: I think that there is an angle to the reasoning synthetic data. So this year, we covered in the paper club the STaR series of papers. So that's STaR, Quiet-STaR, V-STaR. It basically helps you to synthesize reasoning steps, or at least distill reasoning steps from a verifier. And if you look at the OpenAI RFT API that they released, or that they announced, basically they're asking you to submit graders, or they choose from a preset list of graders.[00:34:49] swyx: Basically it feels like a way to create valid synthetic data for them to fine-tune their reasoning paths on. Um, so I think that is another angle where it starts to make sense. And [00:35:00] so like, it's very funny that basically all the data quality wars between, let's say, the music industry or the newspaper publishing industry or the textbook industry and the big labs,[00:35:11] swyx: it's all of the pre-training era. And then like the new era, the reasoning era, nobody has any problem with all the reasoning, especially because it's all sort of math and science oriented with very reasonable graders. I think the more interesting next step is how does it generalize beyond STEM?[00:35:27] swyx: We've been using o1 for, I would say, summarization and creative writing and instruction following, and I think it's underrated. I started using o1 in our intro songs before we killed the intro songs, but it's very good at writing lyrics.
You know, I can actually say, like, I think one of the o1 pro demos,[00:35:46] swyx: all of these things that Noam was showing, was that, you know, you can write an entire paragraph or three paragraphs without using the letter A, right?[00:35:53] Creative Writing with AI[00:35:53] swyx: So like, literally just anything, not even token-level but character-level manipulation and [00:36:00] counting and instruction following. It's, uh, it's very, very strong.[00:36:02] swyx: And so, no surprises, when I ask it to rhyme and to create song lyrics, it's going to do that very much better than previous models. So I think it's underrated for creative writing.[00:36:11] Alessio: Yeah.[00:36:12] Legal and Ethical Issues in AI[00:36:12] Alessio: What do you think is the rationale that they're going to have in court when they don't show you the thinking traces of o1, but then they're getting sued for using other publishers' data, you know? But then on their end, they're like, well, you shouldn't be using my data to then train your model.[00:36:29] Alessio: So I'm curious to see how that kind of comes out. Yeah, I mean, OpenAI has[00:36:32] swyx: many ways to punish people without taking them to court. They already banned ByteDance for distilling their info. And so anyone caught distilling the chain of thought will just be disallowed to continue on the API.[00:36:44] swyx: And it's fine. It's no big deal. Like, I don't even think that's an issue at all, just because the chains of thought are pretty well hidden. Like you have to work very, very hard to get it to leak. And then even when it leaks the chain of thought, you don't know if it's real. [00:37:00] The bigger concern is actually that there's not that much IP hiding behind it, that Cosine, which we talked about, we talked to him on Dev Day, can just fine-tune 4[00:37:13] swyx: o to beat o1. Claude Sonnet so far is beating o1 on coding tasks, at least o1-preview, without being a reasoning model, same for Gemini Pro or Gemini 2.0. So like, how much is reasoning important? How much of a moat is there in this, like, all of this proprietary sort of training data that they've presumably accumulated?[00:37:34] swyx: Because even DeepSeek was able to do it. And they had, you know, two months' notice to do this, to do R1. So, it's actually unclear how much moat there is. Obviously, you know, if you talk to the Strawberry team, they'll be like, yeah, I mean, we spent the last two years doing this. So, we don't know. And it's going to be interesting, because there'll be a lot of noise from people who say they have inference time compute and actually don't, because they just have fancy chain of thought.[00:38:00][00:38:00] swyx: And then there's other people who actually do have very good chain of thought. And you will not see them on the same level as OpenAI, because OpenAI has invested a lot in building up the mythology of their team. Um, which makes sense. Like the real answer is somewhere in between.[00:38:13] Alessio: Yeah, I think that's kind of like the main data war story developing.[00:38:18] The Data War: GPU Poor vs. GPU Rich[00:38:18] Alessio: GPU poor versus GPU rich. Yeah. Where do you think we are? I think there was, again, going back to like the small model thing, there was like a time in which the GPU poor were kind of like the rebel faction working on these models that were like open and small and cheap. And I think today people don't really care as much about GPUs anymore.[00:38:37] Alessio: You also see it in the price of the GPUs. Like, you know, that market has kind of plummeted, because people don't want to be GPU poor, they want to be GPU free. They don't even want to be poor. They just want to be, you know, completely without them. Yeah. How do you think about this war?
You[00:38:52] swyx: can tell me about this, but like, I feel like the appetite for GPU rich startups, like the, you know, the funding plan of "we will raise $60 million and [00:39:00] we'll give $50 million of that to NVIDIA,"[00:39:01] swyx: that is gone, right? Like, no one's pitching that. This was literally the plan, the exact plan, of like four or five startups I can name, you know, this time last year. So yeah, GPU rich startups gone.[00:39:12] The Rise of GPU Ultra Rich[00:39:12] swyx: But I think the GPU ultra rich, the GPU ultra high net worth, is still going. So, um, now we're, you know, we had Leopold's essay on the trillion dollar cluster.[00:39:23] swyx: We're not quite there yet. We have multiple labs, um, you know, xAI very famously, you know, Jensen Huang praising them for being number one in spinning up a 100,000 GPU cluster in like 12 days or something. So likewise at Meta, likewise at OpenAI, likewise at the other labs as well. So like the GPU ultra rich are going to keep doing that, because I think partially it's an article of faith now that you just need it.[00:39:46] swyx: Like you don't even know what you're going to use it for. You just need it. And it makes sense, especially if we're going into [00:40:00] more researchy territory than we are now. So let's say 2020 to 2023 was "let's scale big models" territory, because we had GPT-3 in 2020 and we were like, okay, we'll go from 175B[00:40:05] swyx: to 1.8T. And that was GPT-3 to GPT-4. Okay, that's done. As far as everyone is concerned, Opus 3.5 is not coming out, GPT-4.5 is not coming out, and Gemini 2, we don't have Pro, whatever. We've hit that wall. Maybe I'll call it the 2 trillion parameter wall. We're not going to 10 trillion. No one thinks it's a good idea, at least from the training costs, from the amount of data, or at least the inference.[00:40:36] swyx: Would you pay 10x the price of GPT-4? Probably not.
Like, you want something else that is at least more useful. So it makes sense that people are pivoting in terms of their inference paradigm.[00:40:47] Emerging Trends in AI Models[00:40:47] swyx: And so when it's more researchy, then you actually need more just general purpose compute to mess around with, uh, at the exact same time that production deployments of the previous paradigm are still ramping up[00:40:58] swyx: pretty aggressively.[00:40:59] swyx: So [00:41:00] it makes sense that the GPU rich are growing. We have now interviewed Together and Fireworks and Replicate. Uh, we haven't done Anyscale yet. But I think Amazon, maybe kind of a sleeper one. Amazon, in the sense of like, at re:Invent, I wasn't expecting them to do so well, but they are now a foundation model lab.[00:41:18] swyx: It's kind of interesting. Um, I think, uh, you know, David went over there and started just creating models.[00:41:25] Alessio: Yeah, I mean, that's the power of prepaid contracts. I think like a lot of AWS customers, you know, they do these big reserved instance contracts and now they've got to use their money. That's why so many startups[00:41:37] Alessio: get bought through the AWS Marketplace, so they can kind of bundle them together and get preferred pricing.[00:41:42] swyx: Okay, so maybe GPU super rich doing very well, GPU middle class dead, and then GPU[00:41:48] Alessio: poor. I mean, my thing is like, everybody should just be GPU rich. There shouldn't really be, even the GPU poorest, it's like, does it really make sense to be GPU poor?[00:41:57] Alessio: Like, if you're GPU poor, you should just use the [00:42:00] cloud. Yes, you know, and I think there might be a future, once we kind of figure out what the size and shape of these models is, where like the tinybox and these things come to fruition, where like you can be GPU poor at home.
But I think today is like, why are you working so hard to like get these models to run on like very small clusters where it's like, it's so cheap to run them.[00:42:21] Alessio: Yeah, yeah,[00:42:22] swyx: yeah. I think mostly people think it's cool. People think it's a stepping stone to scaling up. So they aspire to be GPU rich one day and they're working on new methods. Like Nous Research, like probably the most deep tech thing they've done this year is DisTrO or whatever the new name is.[00:42:38] swyx: There's a lot of interest in heterogeneous computing, distributed computing. I tend generally to de emphasize that historically, but it may be coming to a time where it is starting to be relevant. I don't know. You know, SF Compute launched their compute marketplace this year, and like, who's really using that?[00:42:53] swyx: Like, it's a bunch of small clusters, disparate types of compute, and if you can make that [00:43:00] useful, then that will be very beneficial to the broader community, but maybe still not the source of frontier models. It's just going to be a second tier of compute that is unlocked for people, and that's fine. But yeah, I mean, I think this year, I would say a lot more on device. We are, I now have Apple Intelligence on my phone.[00:43:19] swyx: Doesn't do anything apart from summarize my notifications. But still, not bad. Like, it's multi modal.[00:43:25] Alessio: Yeah, the notification summaries are so and so in my experience.[00:43:29] swyx: Yeah, but they add, they add juice to life. And then, um, Chrome Nano, uh, Gemini Nano is coming out in Chrome. Uh, they're still feature flagged, but you can, you can try it now if you, if you use the, uh, the alpha.[00:43:40] swyx: And so, like, I, I think, like, you know, we're getting the sort of GPU poor version of a lot of these things coming out, and I think it's like quite useful. Like Windows as well, rolling out RWKV in sort of every Windows deployment is super cool.
And I think the last thing that I never put in this GPU poor war, that I think I should now, [00:44:00] is the number of startups that are GPU poor but still scaling very well, as sort of wrappers on top of either a foundation model lab, or GPU cloud.[00:44:10] swyx: GPU cloud, it would be Suno. Suno, Ramp has rated as one of the top ranked, fastest growing startups of the year. Um, I think the last public number is like zero to 20 million this year in ARR and Suno runs on Modal. So Suno itself is not GPU rich, but they're just doing the training on, on Modal, uh, who we've also talked to on, on the podcast.[00:44:31] swyx: The other one would be Bolt, straight Claude wrapper. And, and, um, again, another, now they've announced 20 million ARR, which is another step up from our 8 million that we put on the title. So yeah, I mean, it's crazy that all these GPU poors are finding a way while the GPU riches are also finding a way. And then the only failures, I kind of call this the GPU smiling curve, where the edges do well, because you're either close to the machines, and you're like [00:45:00] number one on the machines, or you're like close to the customers, and you're number one on the customer side.[00:45:03] swyx: And the people who are in the middle, Inflection, um, Character, didn't do that great. I think Character did the best of all of them. Like, you have a note in here that we apparently said that Character's price tag was[00:45:15] Alessio: 1B.[00:45:15] swyx: Did I say that?[00:45:16] Alessio: Yeah. You said Google should just buy them for 1B. I thought it was a crazy number.[00:45:20] Alessio: Then they paid 2.7 billion. I mean, for like,[00:45:22] swyx: yeah.[00:45:22] Alessio: What do you pay for Noam? Like, I don't know what the game world was like. Maybe the starting price was 1B. I mean, whatever it was, it worked out for everybody involved.[00:45:31] The Multi-Modality War[00:45:31] Alessio: Multimodality war.
And this one, we never had text to video in the first version, which now is the hottest.[00:45:37] swyx: Yeah, I would say it's a subset of image, but yes.[00:45:40] Alessio: Yeah, well, but I think at the time it wasn't really something people were doing, and now we had Veo 2 just come out yesterday. Uh, Sora was released last month, last week. I've not tried Sora, because the day that I tried, it wasn't, yeah. I[00:45:54] swyx: think it's generally available now, you can go to sora.[00:45:56] swyx: com and try it. Yeah, they had[00:45:58] Alessio: the outage. Which I [00:46:00] think also played a part into it. Small things. Yeah. What's the other model that you posted today that was on Replicate? video-01-live?[00:46:08] swyx: Yeah. Very, very nondescript name, but it is from MiniMax, which I think is a Chinese lab. The Chinese labs do surprisingly well at the video models.[00:46:20] swyx: I'm not sure it's actually Chinese. I don't know. Hold me up to that. Yep. China. It's good. Yeah, the Chinese love video. What can I say? They have a lot of training data for video. Or a more relaxed regulatory environment.[00:46:37] Alessio: Uh, well, sure, in some way. Yeah, I don't think there's much else there. I think like, you know, on the image side, I think it's still open.[00:46:45] Alessio: Yeah, I mean,[00:46:46] swyx: ElevenLabs is now a unicorn. So basically, what is multi modality war? Multi modality war is, do you specialize in a single modality, right? Or do you have a God model that does all the modalities? So this is [00:47:00] definitely still going, in a sense of ElevenLabs, you know, now unicorn, Pika Labs doing well, they launched Pika 2.[00:47:06] swyx: 0 recently, HeyGen, I think has reached 100 million ARR, Assembly, I don't know, but they have billboards all over the place, so I assume they're doing very, very well. So these are all specialist models, specialist models and specialist startups.
And then there's the big labs who are doing the sort of all in one play.[00:47:24] swyx: And then here I would highlight Gemini 2 for having native image output. Have you seen the demos? Um, yeah, it's, it's hard to keep up. Literally they launched this last week and a shout out to Paige Bailey, who came to the Latent Space event to demo on the day of launch. And she wasn't prepared. She was just like, I'm just going to show you.[00:47:43] swyx: So they have voice. They have, you know, obviously image input, and then they obviously can code gen and all that. But the new one that OpenAI and Meta both have but they haven't launched yet is image output. So you can literally, um, I think their demo video was that you put in an image of a [00:48:00] car, and you ask for minor modifications to that car.[00:48:02] swyx: They can generate you that modification exactly as you asked. So there's no need for the Stable Diffusion or ComfyUI workflow of like mask here and then like infill there, inpaint there, and all that, all that stuff. This is small model nonsense. Big model people are like, huh, we got you, it's everything in the transformer.[00:48:21] swyx: This is the multimodality war, which is, do you, do you bet on the God model or do you string together a whole bunch of, uh, small models like a, like a chump. Yeah,[00:48:29] Alessio: I don't know, man. Yeah, that would be interesting. I mean, obviously I use Midjourney for all of our thumbnails. Um, they've been doing a ton on the product, I would say.[00:48:38] Alessio: They launched a new Midjourney editor thing. They've been doing a ton. Because I think, yeah, the motto is kind of like, maybe, you know, people say Black Forest, the Black Forest models are better than Midjourney on a pixel by pixel basis. But I think when you put it, put it together, have you tried[00:48:53] swyx: the same problems on Black Forest?[00:48:55] Alessio: Yes.
But the problem is just like, you know, on Black Forest, it generates one image. And then it's like, you got to [00:49:00] regenerate. You don't have all these like UI things. Like what I do, no, but it's like time issue, you know, it's like a mid[00:49:06] swyx: journey. Call the API four times.[00:49:08] Alessio: No, but then there's no like variants.[00:49:10] Alessio: Like the good thing about Midjourney is like, you just go in there and you're cooking. There's a lot of stuff that just makes it really easy. And I think people underestimate that. Like, it's not really a skill issue, because I'm paying Midjourney, so it's a Black Forest skill issue, because I'm not paying them, you know?[00:49:24] Alessio: Yeah,[00:49:25] swyx: so, okay, so, uh, this is a UX thing, right? Like, you, you, you understand that, at least, we think that Black Forest should be able to do all that stuff. I will also shout out, Recraft has come out, uh, on top of the image arena that, uh, Artificial Analysis has done, has apparently, uh, taken Flux's place. Is this still true?[00:49:41] swyx: So, Artificial Analysis is now a company. I highlighted them I think in one of the early AI Newses of the year. And they have launched a whole bunch of arenas. So, they're trying to take on LM Arena, Anastasios and crew. And they have an image arena. Oh yeah, Recraft v3 is now beating Flux 1.1. Which is very surprising [00:50:00] because Flux and Black Forest Labs are the old Stable Diffusion crew who left Stability after, um, the management issues.[00:50:06] swyx: So Recraft has come from nowhere to be the top image model. Uh, very, very strange. I would also highlight that Grok has now launched Aurora, which is, it's very interesting dynamics between Grok and Black Forest Labs because Grok's images were originally launched, uh, in partnership with Black Forest Labs as a, as a thin wrapper.[00:50:24] swyx: And then Grok was like, no, we'll make our own. And so they've made their own.
I don't know, there are no APIs or benchmarks about it. They just announced it. So yeah, that's the multi modality war. I would say that so far, the small model, the dedicated model people are winning, because they are just focused on their tasks.[00:50:42] swyx: But the big model, people are always catching up. And the moment I saw the Gemini 2 demo of image editing, where I can put in an image and just request it and it does, that's how AI should work. Not like a whole bunch of complicated steps. So it really is something. And I think one frontier that we haven't [00:51:00] seen this year, like obviously video has done very well, and it will continue to grow.[00:51:03] swyx: You know, we only have Sora Turbo today, but at some point we'll get full Sora. Oh, at least the Hollywood labs will get full Sora. We haven't seen video to audio, or video synced to audio. And so the researchers that I talked to are already starting to talk about that as the next frontier. But there's still maybe like five more years of video left to actually be solved.[00:51:23] swyx: I would say that Gemini's approach, compared to OpenAI, Gemini seems, or DeepMind's approach to video seems a lot more fully fledged than OpenAI. Because if you look at the ICML recap that I published, that so far nobody has listened to, um, that people have listened to it. It's just a different, definitely different audience.[00:51:43] swyx: It's only seven hours long. Why are people not listening? It's like everything in, uh, so, so DeepMind has, is working on Genie. They also launched Genie 2 and VideoPoet. So, like, they have maybe four years advantage on world modeling that OpenAI does not have. Because OpenAI basically only started [00:52:00] diffusion transformers last year, you know, when they hired, uh, Bill Peebles.[00:52:03] swyx: So, DeepMind has, has a bit of advantage here, I would say, in, in, in showing, like, the reason that Veo 2, while, one, they cherry pick their videos.
So obviously it looks better than Sora, but the reason I would believe that Veo 2, uh, when it's fully launched will do very well is because they have all this background work in video that they've done for years.[00:52:22] swyx: Like, like last year's NeurIPS, I already was interviewing some of their video people. I forget their model name, but for, for people who are dedicated fans, they can go to NeurIPS 2023 and see, see that paper.[00:52:32] Alessio: And then last but not least, the LLM OS. We renamed it to RAG Ops, formerly known as[00:52:39] swyx: RAG Ops War. I put the latest chart on the Braintrust episode.[00:52:43] swyx: I think I'm going to separate these essays from the episode notes. So the reason I used to do that, by the way, is because I wanted to show up on Hacker News. I wanted the podcast to show up on Hacker News. So I always put an essay inside of there because Hacker News people like to read and not listen.[00:52:58] Alessio: So episode essays,[00:52:59] swyx: I remember [00:53:00] purchasing them separately. You say LangChain, LlamaIndex is still growing.[00:53:03] Alessio: Yeah, so I looked at the PyPI stats, you know. I don't care about stars. On PyPI you see Do you want to share your screen? Yes. I prefer to look at actual downloads, not at stars on GitHub. So if you look at, you know, LangChain still growing.[00:53:20] Alessio: These are the last six months. LlamaIndex still growing. What I've basically seen is like things that, one, obviously these things have a commercial product. So there's like people buying this and sticking with it versus kind of hopping in between things versus, you know, for example, CrewAI, not really growing as much.[00:53:38] Alessio: The stars are growing. If you look on GitHub, like the stars are growing, but kind of like the usage is kind of like flat. In the last six months, have they done some[00:53:4

LessWrong Curated Podcast
“Gradient Routing: Masking Gradients to Localize Computation in Neural Networks” by cloud, Jacob G-W, Evzen, Joseph Miller, TurnTrout

Dec 9, 2024 · 25:15


We present gradient routing, a way of controlling where learning happens in neural networks. Gradient routing applies masks to limit the flow of gradients during backpropagation. By supplying different masks for different data points, the user can induce specialized subcomponents within a model. We think gradient routing has the potential to train safer AI systems, for example, by making them more transparent, or by enabling the removal or monitoring of sensitive capabilities.

In this post, we:
- Show how to implement gradient routing.
- Briefly state the main results from our paper, on...
- Controlling the latent space learned by an MNIST autoencoder so that different subspaces specialize to different digits;
- Localizing computation in language models: (a) inducing axis-aligned features and (b) demonstrating that information can be localized then removed by ablation, even when data is imperfectly labeled; and
- Scaling oversight to efficiently train a reinforcement learning policy even with [...]

---
Outline:
(01:48) Gradient routing
(03:02) MNIST latent space splitting
(04:31) Localizing capabilities in language models
(04:36) Steering scalar
(05:46) Robust unlearning
(09:06) Unlearning virology
(10:38) Scalable oversight via localization
(15:28) Key takeaways
(15:32) Absorption
(17:04) Localization avoids Goodharting
(18:02) Key limitations
(19:47) Alignment implications
(19:51) Robust removal of harmful capabilities
(20:19) Scalable oversight
(21:36) Specialized AI
(22:52) Conclusion

The original text contained 1 footnote which was omitted from this narration.
---
First published: December 6th, 2024
Source: https://www.lesswrong.com/posts/nLRKKCTtwQgvozLTN/gradient-routing-masking-gradients-to-localize-computation
---
Narrated by TYPE III AUDIO.
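As a toy illustration of the masking idea described above (my own sketch, not the authors' code): the model below has two "units," every forward pass uses both, and a per-example routing mask zeroes the gradient for all but the chosen unit during the hand-written SGD backward step. Routing all examples to unit 0 localizes every update there, so unit 1's weight provably never moves from its initialization.

```python
# Toy gradient routing: y = (w[0] + w[1]) * x, a "model" with two units.
# A per-example mask limits gradient flow, so learning is localized in
# whichever unit each data point is routed to.

def sgd_with_routing(data, lr=0.05, steps=500):
    w = [0.0, 0.0]
    for _ in range(steps):
        for x, target, unit in data:
            y = (w[0] + w[1]) * x
            err = y - target                      # dL/dy for L = 0.5*(y - target)**2
            for i in range(2):
                mask = 1.0 if i == unit else 0.0  # the routing mask
                w[i] -= lr * mask * err * x       # masked gradient step
    return w

# Route every example to unit 0: all learning localizes there.
data = [(1.0, 2.0, 0), (2.0, 4.0, 0)]  # target function y = 2x
w = sgd_with_routing(data)
print(w)  # w[0] converges to 2.0; w[1] stays exactly 0.0
```

Because unit 1 never receives gradient, "ablating" it afterward costs nothing, which is the localize-then-remove intuition in miniature.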

Employing Differences
Employing Differences, Episode 219: Are you open to some feedback?

Jul 23, 2024 · 17:36 · Transcription Available


"Before I give somebody some feedback, I actually want to ask them, are they open to it? I want to ask this question because if my intention is to actually share some information that's going to help them to do something effective in the future, then I need to know that they're in a space where they could hear it." Karen & Paul talk about being at choice about how, when, and even if we receive feedback.

The Growing Season
The Growing Season, July 20, 2024 - The Colour Series: Bi-Colour and Colour Gradients

Jul 20, 2024 · 53:43


Employing Differences
Employing Differences, Episode 218: Why don't they do this?

Jul 16, 2024 · 20:09 · Transcription Available


"When we go to work with somebody else, it's pretty easy to find myself saying, 'Man, I wish they would just show up on time." Or, 'Man, I wish they would just be organized and not be constantly asking me for the stuff that I already sent them.' Or, 'I wish they would dream along with me so that I'm not off by myself coming up with all the big ideas.'"Paul & Karen talk about the challenge of working with other people who aren't exactly like us.

Employing Differences
Employing Differences, Episode 217: How can I repair trust?

Jul 9, 2024 · 22:03 · Transcription Available


"We shouldn't expect that things will always go perfectly, that we will never fail to deliver on our obligations. The need to repair trust isn't a sign that this is a broken, dysfunctional relationship. It's normal, it's just unfortunate."Karen & Paul discuss rebuilding trust when we didn't do something we were supposed to.

Employing Differences
Employing Differences, Episode 216: How do we share bad news?

Jul 2, 2024 · 23:36 · Transcription Available


"The way you share bad news has an impact on the relationship; on that space between; in that collaborative space. So we want to explore today ways that you can share that bad news in a way that works; in a way that maybe avoids some of the things that we think might go wrong; that actually helps us to work through the sharing of the bad news together."Paul & Karen share advice for talking about hard things in ways that strengthen the working relationship.This is the second of a two-part series that started with Episode 215.

The ATC Doublecut with Micah Woods
Checking phosphorus gradients on putting greens with an upper and lower rootzone sample

Jun 26, 2024 · 36:00


Soil P, pH, and salinity are a few things that likely vary by depth in your rootzone. I've started recommending that 20% of putting greens tested be done with a sample divided at the 5 cm depth. Send the lab a 0-5 cm and a 5-10 cm depth sample from that green, rather than the standard 0-10 cm depth sample.
I discussed this blog post: https://www.asianturfgrass.com/post/reconstructing-soil-p-from-disassembled-soil-samples/
The MLSN page on the ATC site: https://www.asianturfgrass.com/mlsn/
Read more about all kinds of turfgrass topics at https://www.asianturfgrass.com/
Get ATC newsletters at https://www.asianturfgrass.com/newsletter/
Turfgrass information and decision-making tools at https://www.paceturf.org/
PACE Turf YouTube channel at https://www.youtube.com/user/paceturf
ATC's YouTube channel at https://www.youtube.com/asianturfgrasscenter
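A back-of-the-envelope sketch of the idea (my illustration, not from the episode): if the two layers are sampled with equal thickness and similar bulk density, the standard 0-10 cm reading should sit near the simple average of the split samples, so comparing the two halves reveals a gradient that the single mixed sample would hide. The numbers below are made up for illustration.

```python
def reconstruct_0_10(p_0_5_cm, p_5_10_cm):
    """Approximate the standard 0-10 cm soil P reading (mg/kg) as the
    average of two equal-thickness layers (assumes similar bulk density)."""
    return (p_0_5_cm + p_5_10_cm) / 2

# Hypothetical green with strongly surface-stratified P:
upper, lower = 80.0, 20.0
print(reconstruct_0_10(upper, lower))  # 50.0 -- the mixed 0-10 cm sample
# would report ~50 and mask the 4x surface-to-depth gradient
```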

Employing Differences
Employing Differences, Episode 215: Is it time to share bad news?

Jun 25, 2024 · 19:27 · Transcription Available


"How relevant is it to them in terms of their decision-making? What sort of agency are they gonna have when this comes out? Because if it's something that's uncertain, that they can't actually do anything about, and they're just gonna stew on it and cogitate, ruminate, and stress about it, then maybe it makes sense not to share it until we know it's more likely that it's actually gonna happen." Karen & Paul when – and when not – to share with a group things that they don't want to hear.This is the first of a two-part series that concludes with Episode 216.

Contemporánea
49. Notaciones

Jun 22, 2024 · 18:42


The musical renewal of the second half of the twentieth century is visible in the radical originality of the scores in which its works are written. Notation now resembles scientific or engineering work, drawings or experimental poems, diagrams or mathematical formulas.

You have heard:
Berlino (1980-1981) / Terry Fox. Apollo Records (1988)
Gamelan Coming & Going (1985) / Philip Corner. Philip Corner and Evan Schwartzmann, piano and voice. Recorded at Rutgers University, MGSA, New Brunswick (USA), November 1985. Ants (2017)
Gradients of Detail (2005-2006) / Chiyoko Szlavnics. Ensemble musikFabrik; Peter Rundel, conductor. Maria de Alvear World Edition (2022)
The Seasons: Vermont "Spring" (1980-1982) / Malcom Goldstein. Malcom Goldstein, violin; Robert Black, double bass; Mark Steven Brooks, various instruments; Tom Guralnick, oboe and winds; Joseph Celli, English horn; Brian Johnson, percussion; Kenneth Karpowicz, accordion. XI Records (1998)

Selected bibliography:
BARRETT, Richard, "Notation as Liberation". Tempo, vol. 68, no. 268 (2014), pp. 61-72*
BISERNA, Elena, Walking from Scores: An Anthology of Text and Graphic Scores to Be Used while Walking. Les Presses du Réel, 2022*
BLACK, Robert, "Contemporary Notation and Performance Practice: Three Difficulties". Perspectives of New Music, vol. 22, no. 1-2 (1983), pp. 117-146*
BROWN, Earle, "The Notation and Performance of New Music". The Musical Quarterly, vol. 72, no. 2 (1986), pp. 180-201*
BUJ CORRAL, Marina, "Sinestesias en la notación gráfica: lenguajes visuales para la representación del sonido". Cuadernos de Música, Artes Visuales y Artes Escénicas, vol. 14, no. 1 (2019), pp. 45-64*
—, "Confluencias artísticas y experimentación: la notación gráfica en España". In: Poéticas encontradas: convergencias artísticas en la música de los siglos XX y XXI. Edited by Belén Pérez Castillo and Ruth Piquer Sanclemente. Comares, 2023*
DAVIES, Stephen, "Notation". In: The Routledge Companion to Philosophy and Music. Edited by Theodore Gracyk and Andrew Kania. Routledge, 2011*
EVARTS, John, "The New Musical Notation: A Graphic Art?". Leonardo, vol. 1, no. 4 (1968), pp. 405-412*
GARCÍA FERNÁNDEZ, Isaac Diego, "El grafismo musical en la frontera de los lenguajes artísticos". Sinfonía Virtual: Revista de Música Clásica y Reflexión Musical, no. 5 (2007), accessed June 21, 2023: [Web]
IGES, José and Manuel Olveira, El giro notacional. Cendeac, 2019*
KOJS, Juraj, "Notating Action-Based Music". Leonardo Music Journal, vol. 21 (2011), pp. 65-72*
MESTRES QUADRENY, Josep María, Tot muda de color al so de la flauta. Fundació Joan Brossa i Ajuntament de Barcelona, 2010*
PISARO, Michael, "Writing Music". In: The Ashgate Research Companion to Experimental Music. Edited by James Saunders. Ashgate, 2009*
POPE, Stephen Travis, "Music Notations and the Representation of Musical Structure and Knowledge". Perspectives of New Music, vol. 24, no. 2 (1986), pp. 156-189*
RIVIÈRE, Henar, "José Luis Castillejo y la escritura moderna". In: José Luis Castillejo y la escritura moderna. Edited by José María Lafuente. Ediciones La Bahía, 2018*
SMITH, Sylvia and Stuart Smith, "Visual Music". Perspectives of New Music, vol. 20, no. 1-2 (1981), pp. 75-93*
STONE, Kurt, "Problems and Methods of Notation". Perspectives of New Music, vol. 1, no. 2 (1963), pp. 9-31*
VALLE, Andrea, Contemporary Music Notation: Semiotic and Aesthetic Aspects. Logos Verlag Berlin, 2018
VILLA ROJO, Jesús, Juegos gráfico-musicales. Editorial Alpuerto, 1982*
—, Notación y grafía musical en el siglo XX. Iberautor, 2003*
WEIBEL, Peter et al. (eds), From Xenakis's UPIC to Graphic Notation Today. Hatje Cantz, 2020

*Document available for consultation in the New Music Room of the Library and Research Support Center of the Fundación Juan March

Employing Differences
Employing Differences, Episode 214: What went wrong?

Jun 18, 2024 · 23:01 · Transcription Available


"Sometimes there are actual consequences that do need to go along when things go wrong, but it's far less often than we actually seem to think when somebody is saying there needs to be more accountability." Paul & Karen discuss what do to when we did have shared expectations about what we needed to do, but we still didn't get the results we wanted.This is the second of a two-part series that started with Episode 213.

Employing Differences
Employing Differences, Episode 213: Do we need more accountability?

Jun 11, 2024 · 21:23 · Transcription Available


"The statement 'We need more accountability' usually comes up when we've deviated in some way from our perfect idea of how we're going to work together. Which is to say, we start with this shared understanding of what it is we're all each individually going to do and deliver and who's going to do, you know, what by when and how well sorts of things."Karen & Paul discuss techniques for setting expectations for interdependent work.This is the first of a two-part series that concludes with Episode 214.

Employing Differences
Employing Differences, Episode 212: How do we find alignment?

Jun 4, 2024 · 21:21 · Transcription Available


"For this particular decision, how aligned do we all need to be around it in order for the decision to be effective? In order for the idea that we're trying to move forward to really move into reality? Because that threshold of alignment varies with the type of thing that we're trying to do." Paul & Karen talk about helping a group converge on decisions.This is the finale of a three-part series that also contains Episodes 210 and 211.

Employing Differences
Employing Differences, Episode 211: How do we find disagreement?

May 28, 2024 · 19:55 · Transcription Available


"That's where the actual solutions to your complex problems are: in the things that we disagree about, that we want to run away from because other people's ideas or perspectives are so different from the ones that we have. We can't just look at them and go, "Okay, fine, whatever," and move on. We actually need to come to understand them – because that's where durable solutions to complex problems actually come from." Karen & Paul share advice for supporting a group in seeking out and exploring disagreement.This is the middle of a three-part series that starts with Episode 210 and concludes with 212.

Ophthalmology Journal
Aqueous Macrophages Contribute to Conserved CCL2 & CXCL10 Gradients in Uveitis

May 23, 2024 · 16:44


Uveitis is a heterogeneous group of inflammatory eye diseases for which current cytokine-targeted immune therapies are effective for only a subset of patients. Dr. Edmund Tsui is joined by Dr. Lynn M. Hassman and MD/PhD student Joseph B. Lin to explore potential common underlying mechanisms for immune cell recruitment in uveitis in their Ophthalmology Science article, "Aqueous Macrophages Contribute to Conserved CCL2 and CXCL10 Gradients in Uveitis" (Lin, Joseph B. et al., Ophthalmology Science, Volume 4, Issue 4).
The Ophthalmology family of journals is now on Instagram. Follow aaojournal for clinical images, research articles, news, editorials, podcasts, and more!
Sign up for the next Ophthalmology Journal Virtual Club on June 19, 2024, at https://store.aao.org/ophthalmology-virtual-journal-club.html

Employing Differences
Employing Differences, Episode 210: What happens between idea and reality?

May 21, 2024 · 16:03 · Transcription Available


"Whatever that thing is, we're bringing our own flavor to it, and we don't realize that we're painting it. A lot of that flavor painting is unconscious. And so it's not that I think I'm sneaking my way in, or you think you're sneaking your way in. It's that we don't notice how much of our own kind of criteria or condition that we have on it in our own mind." Paul & Karen discuss things that predictably happen when we start working together to make an idea a reality.This is the beginning of a three-part series that continues with Episodes 211 and 212.

Employing Differences
Employing Differences, Episode 209: Who will do this?

May 14, 2024 · 26:43 · Transcription Available


"We often say, 'Who's willing to do this?' And we wait awkwardly for someone to raise their hand. Willingness matters, but it is not the only concern that we have." Karen & Paul share techniques and advice for filling roles within a group.

Employing Differences
Employing Differences, Episode 208: How deep do we go?

May 7, 2024 · 15:44 · Transcription Available


"I don't get to decide that for the group. I set prompts that might be deeper or shallower. I set structures that facilitate depth. Some of them foster depth, and some of them don't as much. So I do make choices about how deep we're going to go. But at the end of the day, I actually only am making a decision about how deep I'm inviting people to go."Paul & Karen discuss ways to help people do the internal work required to create change within a group.

Employing Differences
Employing Differences, Episode 207: Is this a safe space?

Apr 30, 2024 · 18:08 · Transcription Available


"It's very difficult for me to assess for other people, 'Is this a space where it is safe for them to take risks?' Because they may have very different risk profiles and very different things that make things dangerous. And they may have much more severe consequences that could hit them as a result of things that they bring up. And so I think that I can say that I feel safe in this space, but I don't think I can ever say that for anybody else."Karen & Paul discuss risks, consequences, and safety in group discussions.

The Nonlinear Library
AF - ProLU: A Pareto Improvement for Sparse Autoencoders by Glen M. Taggart

The Nonlinear Library

Play Episode Listen Later Apr 23, 2024 8:59


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ProLU: A Pareto Improvement for Sparse Autoencoders, published by Glen M. Taggart on April 23, 2024 on The AI Alignment Forum. Abstract This paper presents ProLU, an alternative to ReLU for the activation function in sparse autoencoders that produces a Pareto improvement over both standard sparse autoencoder architectures and sparse autoencoders trained with the Sqrt(L1) penalty. Introduction SAE Context and Terminology Learnable parameters of a sparse autoencoder: W_enc: encoder weights; W_dec: decoder weights; b_enc: encoder bias; b_dec: decoder bias. Training Notation: Encoder/Decoder Let encode(x) = ReLU((x - b_dec) W_enc + b_enc) and decode(a) = a W_dec + b_dec, so that the full computation done by an SAE can be expressed as SAE(x) = decode(encode(x)). An SAE is trained with gradient descent on L(x) = ||x - SAE(x)||² + λ P(encode(x)), where λ is the sparsity penalty coefficient (often the "L1 coefficient") and P is the sparsity penalty function, used to encourage sparsity. P is commonly the L1 norm ||a||_1, but recently the Sqrt(L1) penalty has been shown to produce a Pareto improvement on the L0 and CE metrics. Sqrt(L1) SAEs There has been other work producing Pareto improvements to SAEs by taking P(a) = ||a||_{1/2}^{1/2} as the penalty function. We will use this as a further baseline to compare against when assessing our models. Motivation: Inconsistent Scaling in Sparse Autoencoders Due to the affine translation, sparse autoencoder features with nonzero encoder biases only perfectly reconstruct feature magnitudes at a single point. This poses difficulties if activation magnitudes for a fixed feature tend to vary over a wide range. This potential problem motivates the concept of scale consistency and a scale-consistent response curve: the bias maintains its role in noise suppression, but no longer translates activation magnitudes when the feature is active.
The lack of gradients for the encoder bias term poses a challenge for learning with gradient descent. This paper will formalize an activation function which gives SAEs this scale-consistent response curve, motivate and propose two plausible synthetic gradients, and compare scale-consistent models trained with the two synthetic gradients to standard SAEs and SAEs trained with the Sqrt(L1) penalty. Scale Consistency Desiderata Notation: Centered Submodule The use of the decoder bias can be viewed as performing centering on the inputs to a centered SAE, then reversing the centering on the outputs: SAE(x) = SAE_cent(x - b_dec) + b_dec, where SAE_cent(x) = ReLU(x W_enc + b_enc) W_dec. Notation: Specified Feature Let W_i denote the weights and b_enc,i the encoder bias for the i-th feature. Then let SAE_i(x) = SAE_cent,i(x - b_dec) + b_dec, where SAE_cent,i(x) = ReLU(x W_enc,i + b_enc,i) W_dec,i. The desiderata are conditional linearity and a noise-suppression threshold. Methods Proportional ReLU (ProLU) We define the Proportional ReLU (ProLU) elementwise: ProLU(m, b)_i = m_i if m_i + b_i > 0, and 0 otherwise. Backprop with ProLU: To use ProLU in SGD-optimized models, we first address the lack of gradients w.r.t. the b term. ReLU gradients: For comparison and later use, we first consider ReLU: partial derivatives are well defined for ReLU at all points other than x_i = 0. Gradients of ProLU: Partials of ProLU w.r.t. m are similarly well defined. However, they are not well defined w.r.t. b, so we must synthesize these. Notation: Synthetic Gradients Let f̂_x denote the synthetic partial derivative of f w.r.t. x, and ∇̂f the synthetic gradient of f, used for backpropagation as a stand-in for the gradient. Different synthetic gradient types We train two classes of ProLU with different synthetic gradients, distinguished by their subscript: ProLU_ReLU and ProLU_STE. They are identical in output but have different synthetic gradients. ReLU-Like Gradients: ProLU_ReLU The first synthetic gradient is very similar to the gradient for ReLU. We retain the gradient w.r.t. m, and define the synthetic gradient w.r.t. b as follows. Thresh STE Derived Gradients: ProLU_STE The second class of Pro...
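The architecture this summary describes (an encoder/decoder pair trained with a sparsity penalty, plus the ProLU gating idea) can be sketched in a few lines of Python. This is only an illustration: the dimensions, random weights, and λ value are toy assumptions rather than the paper's settings, and the synthetic-gradient machinery is omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_sae = 4, 8                      # arbitrary toy dimensions
W_enc = rng.normal(size=(d_model, d_sae))
W_dec = rng.normal(size=(d_sae, d_model))
b_enc = np.zeros(d_sae)
b_dec = np.zeros(d_model)

def prolu(m, b):
    # ProLU as described: the bias only gates (noise suppression); when a
    # unit is active its magnitude m passes through unchanged, unlike
    # ReLU(m + b), which shifts the active magnitude by b.
    return np.where(m + b > 0, m, 0.0)

def encode(x, act=prolu):
    # Centered encoder: subtract the decoder bias, project, then gate.
    return act((x - b_dec) @ W_enc, b_enc)

def decode(a):
    # decode(a) = a @ W_dec + b_dec
    return a @ W_dec + b_dec

def loss(x, lam=1e-3):
    # Reconstruction error plus sparsity penalty, here P(a) = ||a||_1.
    a = encode(x)
    return float(np.sum((x - decode(a)) ** 2) + lam * np.sum(np.abs(a)))

# Magnitude preservation vs. ReLU on a toy pre-activation:
m, b = np.array([1.0, 0.2]), np.array([-0.5, -0.5])
# np.maximum(m + b, 0.0) -> [0.5, 0.0]  (ReLU: bias shifts the magnitude)
# prolu(m, b)            -> [1.0, 0.0]  (bias only thresholds; magnitude kept)
```

The contrast in the last two lines is the whole point of the scale-consistent response curve: both activations gate the second unit off, but only ProLU leaves the surviving magnitude untouched.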

Employing Differences
Employing Differences, Episode 206: Am I controlling?

Employing Differences

Play Episode Listen Later Apr 23, 2024 19:05 Transcription Available


"We are assuming that no one wants to be a controlling boss or to use the structural power that they have in a way that would cause someone else to feel controlling or that would diminish somebody else's agency to speak or do their job or say things. It doesn't mean that that person who has authority doesn't want to make decisions and use their decision-making authority, but they pretty reliably don't want it to impact the relationships or have other people feel like they've been put in their place or be put down." Paul & Karen discuss hierarchy, requests, and how to approach them in ways that foster teamwork.

Sounds of SAND
#81 Ecology of Care: DRĖĖĖMY

Sounds of SAND

Play Episode Listen Later Apr 18, 2024 65:44


Reem (DRĖĖĖMY) Abdou is a native Egyptian international interdisciplinary sound artist, embodiment and meditation guide, curator, cultural worker, and community-building founder of the inclusive global impact agency for women+ & BIPOC holistic artists: The Collective BAE. As an intentional DJ and spoken word poet, her work harnesses music, movement, and meditation to activate real shifts at the intersection of transformational creativity, social and healing justice, and ecosystem consciousness. Links: www.dreeemy.com instagram.com/dreeemy www.collectivebae.com Upcoming projects: Join The BAE (RE)MEMBERSHIPS: An Ecology of Care for Conscious Creatives, with a full training course launching this May. Also coming this May: the release of the 2nd EP, SALTWATERS, in the Mother & Water project. Topics: 00:00 — Introduction 03:00 — Ancestry & Dream work 06:45 — Communities 11:19 — Bass Yoga 17:19 — Gradients & Binaries 23:17 — Ecologies of Care 32:33 — Sacred Activism 36:51 — Post-COVID Shift 45:31 — Egyptian Lineage 53:44 — Upcoming Projects Support the mission of SAND and the production of this podcast by becoming a SAND Member.

Authentic Biochemistry
Immune Cell Biochemistry I. T lymphocyte Membrane Biochemistry c.4. 15April 2024. Chemokine gradients and the membrane lipid raft. Authentic Biochemistry Podcast Dr. Daniel J. Guerra

Authentic Biochemistry

Play Episode Listen Later Apr 16, 2024 29:58


References FEBS J. 2018 Aug; 285(16): 2944–2971. Nature Reviews Immunology 2023. volume 23, pages 236–250. Angiogenesis. 2021; 24(4): 719–753. Front. Immunol. 2022 Sec. Microbial Immunology Volume 12 Camp, H. 1964. "Pride of Man". performed by Quicksilver Messenger Service. https://youtu.be/fG6A6G9uzsQ?si=tKMdQMVbpTn2x85D --- Send in a voice message: https://podcasters.spotify.com/pod/show/dr-daniel-j-guerra/message Support this podcast: https://podcasters.spotify.com/pod/show/dr-daniel-j-guerra/support

The Look & Sound of Leadership
Building Consensus – Savvy or Sin?

The Look & Sound of Leadership

Play Episode Listen Later Oct 4, 2023 27:06


A leader, surrounded by people she knows and trusts, can't seem to get traction with them. In conversation with her coach, she discovers an unexpected cause. Tools for teams abound in this episode. The four tools Tom suggests as first steps for helping your team openly discuss ideas are: say 'thank you' when people offer an idea; be curious, don't dismiss ideas or people; don't take differences personally, it's not about you; and develop your comfort with disagreement. The Gradients of Agreement tool supports the ideas in this episode. Download it for free from our Essential Tools bin along with other communication tools like Sorting & Labeling. Additional tools for team growth mentioned in this episode are the classic Crucial Conversations and Tom's conversation on The cATalyzing Podcast. This episode is in our podcast library in three categories: Communication Skills, Leading Teams, and Management Skills. Specific episodes that will help you develop your team are: Facilitating Open Dialogue; How Teams Fight; Leadership Behavior in Meetings; Power Tools for Teams: Plus/Delta; Questions as Leadership; and Taming Meetings. Our monthly Essential News email provides links to even more resources. You can sign up here. To those of you who post reviews, thank you so much for supporting the show. Let us know how we can support you. Until next month, from Tom and everyone at The Look & Sound of Leadership, be well! And thanks! #podcast #TeamDecisionMaking #Teamwork #Leadership #ConsensusBuilding #EffectiveTeams #TeamCommunication #TeamLeadership

The Sell More Books Show: Book Marketing, Digital Publishing and Kindle News, Tools and Advice

Today's top story is AI Survey Says.... Question of the week is what's the best advice you've heard floating around the author community? Join the Sell More Books Show Afterparty group on Facebook and answer the Question of the Week in the comment section. Be sure to leave us a review on Apple Podcasts.

EVOQ.BIKE Cycling Podcast
Hill Climb/KOM Cycling Training: Anaerobic vs Aerobic Power, Course Gradients, Pacing, Training Plan

EVOQ.BIKE Cycling Podcast

Play Episode Listen Later Jul 5, 2023 12:21


In order to find success in hill climbs, we need both aerobic and anaerobic power. The training needs will differ from athlete to athlete, but also based on what the COURSE demands (a 3-minute Hill Climb or KOM should be ridden differently than a 10-minute Hill Climb or KOM). I hope this video helps parse this out. Shill: If you do need help, I'd recommend you look at the VO2Max link below, and the Over Unders. For anaerobic training, check out the free blog; the workouts are listed there. 00:00 Intro 00:55 Four Biggest Physiological Determinants of Your Success 01:20 Aerobic vs Anaerobic Power 01:34 Anaerobic Power 01:50 Muscular Strength 02:27 Functional Threshold Power 02:59 Should I Train Aerobic or Anaerobic Power for Hill Climbs? 03:24 Three Different Types of Durations 04:49 Watts per KG! Not ALWAYS 05:40 Pacing 05:54 Oxygen Deficit for PACING, shout out to Tom Bell 08:00 Season Planning 11:34 WE NEED YOU to subscribe to get even more World Tour athletes and Olympians! VO2Max Training Pack: https://www.trainingpeaks.com/training-plans/cycling/tp-352658/vo2max-training-pack-with-top-10-workouts VO2Max / Paceline Sim: https://www.trainingpeaks.com/training-plans/cycling/tp-297603/vo2max-and-paceline-sim-build-before-races Anaerobic Training Blog: https://www.evoq.bike/blog/anaerobic-capacity-cycling All Plans: https://www.trainingpeaks.com/coach/evoq-bike#trainingplans Email me for Coaching Options: Brendan@EVOQ.BIKE

Anchored
Anchored Podcast Ep. 225: Devin Olsen on Nymph Fishing, History, Temperature Gradients and More

Anchored

Play Episode Listen Later Jun 21, 2023 120:01


Devin Olsen has been a member of Fly Fishing Team USA since 2006, winning individual bronze and team silver in the 2015 World Fly Fishing Championships in Bosnia. Once a salmon and steelhead biologist in Joseph, Oregon, Devin is a wealth of knowledge. In this episode of Anchored we discuss how he got into competitive fly fishing, how it's structured, and how he used it to become the angler he is today. We discuss nymph specifics, loch fishing, history, temperature gradients, and more. Speaking of nymphs, we've also recently added a 12-part series featuring Skip Morris to anchoredoutdoors.com. Skip's latest book, Top 12 Nymphs for Trout Streams: How, When, and Where to Fish Them, focuses on fly fishing and fly tying, and guides you through 12 great nymph flies and how to catch trout on them in creeks, streams, and rivers. Accompanying a color photograph of each fly, Skip shares what each fly imitates, what it's designed to do, and what it does do in the water, and then he tells you how to fish it effectively, when it fishes best, and how deep in the water to fish it, and offers the different fishing methods that make it catch fish. He describes those methods plainly, so you can go right out and make them work. He even provides a section that helps you select the right fly for specific fishing conditions and choose the best method for presenting that fly to trout. Here's the purchase link: https://www.skip-morris-fly-tying.com/top-12-nymphs-for-trout-streams-how-when-and-where-to-fish-them-2nd-edition.html Learn more about your ad choices. Visit megaphone.fm/adchoices

Adafruit Industries
John Park's CircuitPython Parsec: Color Gradients with NeoPixels

Adafruit Industries

Play Episode Listen Later Jun 3, 2023 2:45


#circuitpythonparsec Create RGB color gradients for NeoPixels in CircuitPython Learn about CircuitPython: https://circuitpython.org Visit the Adafruit shop online - http://www.adafruit.com ----------------------------------------- LIVE CHAT IS HERE! http://adafru.it/discord Adafruit on Instagram: https://www.instagram.com/adafruit Subscribe to Adafruit on YouTube: http://adafru.it/subscribe New tutorials on the Adafruit Learning System: http://learn.adafruit.com/ -----------------------------------------
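The technique named in this episode, building an RGB color gradient across a NeoPixel strip, generally comes down to per-channel linear interpolation. Here is a generic sketch of that idea (plain Python rather than CircuitPython, and not the code from the video; the function names are illustrative):

```python
def lerp_color(c1, c2, t):
    # Linearly interpolate each RGB channel between two colors; t runs 0..1.
    return tuple(round(a + (b - a) * t) for a, b in zip(c1, c2))

def make_gradient(c1, c2, n):
    # n evenly spaced colors from c1 to c2, e.g. one color per pixel
    # on an LED strip of length n.
    return [lerp_color(c1, c2, i / (n - 1)) for i in range(n)]

# make_gradient((0, 0, 0), (255, 0, 0), 3)
#   -> [(0, 0, 0), (128, 0, 0), (255, 0, 0)]
```

On actual hardware, each tuple would be assigned to one pixel of the strip before calling its show/update routine.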

ShopTalk » Podcast Feed
564: Render ATL, New Colors Available, Gradients, HDR, and More

ShopTalk » Podcast Feed

Play Episode Listen Later May 8, 2023 47:02


Chris previews a bit of his Render ATL 2023 talk, and then we mouth blog some color ideas, thoughts, and shame you for your non-HD websites.

Frontend First
React email previews and radial gradients

Frontend First

Play Episode Listen Later May 3, 2023 42:02


Sam and Ryan talk about using MJML to design, build, and send transactional emails with React directly in the browser. They also chat about how to use Framer Motion to get a CSS radial gradient to follow the mouse cursor, and the differences between React state, refs, Motion Values, and external stores. Topics include: 0:00 Intro; 1:10 Building in-browser email previews with MJML; 18:50 Using radial gradients and Motion Values to build a moving spotlight treatment. Links: MJML; React Email; Maizzle; Sam's Spotlight video on YouTube; Spotlight code recipe.

Retraice
Re78: Recap of Gradients and Partial Derivatives (AIMA4e pp. 119–122)

Retraice

Play Episode Listen Later Dec 12, 2022 29:39


An overview of Re70–Re76. Subscribe at: https://paid.retraice.com Details: Re70; Re71; Re72; Re73; Re74; Re75; Re76. Complete notes and video at: https://www.retraice.com/segments/re78 Air date: Sunday, 11th Dec. 2022, 11 : 00 PM Eastern/US. 0:00:00 Re70; 0:13:26 Re71; 0:14:27 Re72; 0:15:37 Re73; 0:16:51 Re74; 0:19:15 Re75; 0:21:03 Re76. Copyright: 2022 Retraice, Inc. https://retraice.com

Retraice
Re76: Gradients and Partial Derivatives Part 7 (AIMA4e pp. 119–122)

Retraice

Play Episode Listen Later Dec 11, 2022 23:07


Moving the airport to improve its value. Subscribe at: https://paid.retraice.com Details: two more guesses; the hand-math; the spreadsheet-math. Complete notes and video at: https://www.retraice.com/segments/re76 Air date: Saturday, 10th Dec. 2022, 11 : 00 PM Eastern/US. 0:00:00 two more guesses; 0:01:50 the hand-math; 0:17:48 the spreadsheet-math. Copyright: 2022 Retraice, Inc. https://retraice.com

Retraice
Re75: Gradients and Partial Derivatives Part 6 (AIMA4e pp. 119–122)

Retraice

Play Episode Listen Later Dec 10, 2022 10:25


Can we please just place an airport? Subscribe at: https://paid.retraice.com Details: a guess; calculating the objective function value; toil and explanation. Complete notes and video at: https://www.retraice.com/segments/re75 Air date: Friday, 9th Dec. 2022, 11 : 00 PM Eastern/US. 0:00:00 a guess; 0:02:44 calculating the objective function value; 0:06:23 toil and explanation. Copyright: 2022 Retraice, Inc. https://retraice.com

Retraice
Re74: Gradients and Partial Derivatives Part 5 (AIMA4e pp. 119–122)

Retraice

Play Episode Listen Later Dec 9, 2022 17:29


Bringing the algebra back down to numbers. Subscribe at: https://paid.retraice.com Details: two cities; distance; calculating the objective function. Complete notes and video at: https://www.retraice.com/segments/re74 Air date: Thursday, 8th Dec. 2022, 11 : 00 PM Eastern/US. 0:00:00 two cities; 0:06:33 distance; 0:10:30 calculating the objective function. Copyright: 2022 Retraice, Inc. https://retraice.com

Retraice
Re73: Gradients and Partial Derivatives Part 4 (AIMA4e pp. 119–122)

Retraice

Play Episode Listen Later Dec 8, 2022 20:47


The limits that define our gradient. Subscribe at: https://paid.retraice.com Details: our gradient equation; the partial derivatives. Complete notes and video at: https://www.retraice.com/segments/re73 Air date: Wednesday, 7th Dec. 2022, 11 : 00 PM Eastern/US. 0:00:00 our gradient equation; 0:13:01 the partial derivatives. Copyright: 2022 Retraice, Inc. https://retraice.com

Retraice
Re72: Gradients and Partial Derivatives Part 3 (AIMA4e pp. 119–122)

Retraice

Play Episode Listen Later Dec 7, 2022 29:19


Be in the math. Subscribe at: https://paid.retraice.com Details: the airport problem; the solution vector; the objective function; the gradient vector; the partial derivative. Complete notes and video at: https://www.retraice.com/segments/re72 Air date: Tuesday, 6th Dec. 2022, 11 : 00 PM Eastern/US. 0:00:00 the airport problem; 0:04:23 the solution vector; 0:08:44 the objective function; 0:16:18 the gradient vector; 0:21:56 the partial derivative. Copyright: 2022 Retraice, Inc. https://retraice.com
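The objective function, gradient vector, and partial derivatives these segments walk through come from AIMA's airport-placement example. As a rough illustration, here is a toy single-airport sketch (the city coordinates, step size, and iteration count are invented for this example, not taken from the book):

```python
import numpy as np

# Toy setup: one airport, objective = sum of squared straight-line
# distances from the airport at p = (x, y) to each city.
cities = np.array([[0.0, 0.0], [4.0, 2.0]])

def objective(p):
    # f(p) = sum over cities c of (x - x_c)^2 + (y - y_c)^2
    return float(np.sum((cities - p) ** 2))

def gradient(p):
    # Partial derivatives: df/dx = 2 * sum(x - x_c), df/dy = 2 * sum(y - y_c)
    return 2.0 * np.sum(p - cities, axis=0)

# Plain gradient descent walks the airport to the centroid of the
# cities, (2, 1), where the gradient vanishes.
p = np.zeros(2)
for _ in range(100):
    p -= 0.1 * gradient(p)
```

For a sum-of-squared-distances objective the minimizer is just the centroid, which makes this a convenient toy to check a gradient-descent loop against.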

Retraice
Re71: Gradients and Partial Derivatives Part 2 (AIMA4e pp. 119–122)

Retraice

Play Episode Listen Later Dec 6, 2022 31:21


Put the airport problem first. Subscribe at: https://paid.retraice.com Details: the airport toy problem; a pile of numbers. Complete notes and video at: https://www.retraice.com/segments/re71 Air date: Monday, 5th Dec. 2022, 11 : 00 PM Eastern/US. 0:00:00 the airport toy problem; 0:09:32 a pile of numbers. Copyright: 2022 Retraice, Inc. https://retraice.com

THE WONDER: Science-Based Paganism

Remember, we welcome comments, questions and suggested topics at thewonderpodcastQs@gmail.com. S3E34 TRANSCRIPT: Yucca: Welcome back to The Wonder: Science-Based Paganism. I'm one of your hosts, Yucca. Mark: And I'm the other one, Mark. Yucca: And today we are talking about cauldrons. Mark: Yeah. Yucca: Yes, and welcome to October. We're here in the wonderful autumn month; our kind of spooky Hallows is coming, and here we are. So we're gonna have some great episodes this month. Mark: Yeah, I'm really excited about it. We've got a lot of cool stuff to talk about for the witchy month and can't wait to get started. Yucca: Yeah. So speaking of witchy, there are probably three symbols most associated with the witch: broomstick, pointy hat, and cauldron. Mark: Right. Yucca: Yeah. Mark: No one will make any mistake about what you are trying to represent if you've got those three things with you. Yucca: Yep. And, oh, please. Mark: Well, I was gonna say, we don't have enough to say about a pointy hat to turn it into an episode, but there's plenty to talk about with a cauldron. Yucca: There is, yes. So I think a good place to start would probably be, you know, the history. What is a cauldron, what's the history, and why it really matters, why we're interested in this symbol. Mark: Mm-hmm. Well, from my standpoint, I think you really identified the main reason why we're interested in it. I mean, for those of us that gravitate towards Paganism and its aesthetic and its iconography in our ritual practice, those standard symbols, like the cauldron, become very potent. They become very influential; when you're brewing something over a cauldron, there is very much this sense that you're doing magic, right? Yucca: Yeah. Well, and I think that the association with the witch... a witch is a powerful figure, right? And they can be represented in different ways in terms of the morality of them in stories, right?
Depending on who's telling the story, whether they're, you know, the good guy or the bad guy. But they're always powerful, right? They always have agency. But that agency also usually is coming from them and the home. And the cauldron has this association with the home because it's a tool of the kitchen, whether that's an outdoor kitchen around the fire or whether that was your kitchen in the home at the hearth. Mark: Right, yeah. I mean, among the very earliest implementations of any kind of cooking equipment that we're familiar with are ceramic pots that were used for cooking things in. Hot stones would be put inside a ceramic pot, and then cereals or meat and water or whatever could be stirred in that, and it would boil, which would sterilize it, of course, but would also break down proteins in the food to make it easier to digest. And we have evidence of that going back thousands and thousands of years. Yucca: Right. Well, because there's a lot of foods... there's a lot of plants that you might be digging up that you can't eat. Mark: Right. Yucca: Right. You have to cook them. And so if we were gonna be doing that, then we needed to cook them. Mark: Right, and we've had thousands of generations to do the experimentation to figure those things out. I mean, people talk about, you know, indigenous knowledge and indigenous healing. Well, think about all the trial and error that went into figuring that stuff out. It's like, all right, who's gonna eat the mushroom? All right, Bob's gonna eat it. Oh, Bob's gone. Yucca: Okay. Let's remember that mushroom. Mark: Right. But how did they ever get to the point of feeding the mushroom to reindeer and then gathering their urine? Yucca: Yeah. Mark: I mean, it's just the scale of... Yucca: Time, yeah. The time we've been around. On the one hand, if you compare us to crocodiles, we've barely been around, right?
But compared to an individual human or an individual culture's memory, it's so, so long. Mark: Right, yeah. 200,000 years since we really started developing culture. Yucca: Or, well, human... at least our genus is older, and there's a lot of argument to be made that other humans, not just Homo sapiens, had quite a bit of culture as well. Mark: Well, sure. They had the domestication of fire, and there are a lot of strong arguments to be made that the domestication of fire was kind of the launching pad for human culture. In many ways it also coincided with a rapid evolution of our brains, because we were getting a lot more food value out of our food once we started cooking it. Yucca: Yeah. Mark: This is a tangent, but... Yucca: Well, we can relate it back, though, because fire and cauldrons... right? So this was a planned tangent, we can say. Mark: So, yeah, the hearth, the home fire, and the cooking pot sitting over it are very, very ancient symbols of power, of transformation. You know, you put those ingredients in and they come out different. They come out edible food, they come out tasting different... Yucca: Smelling good. Mark: Smelling good. There's just all kinds of wonderful things that happen in the alchemy of that cauldron. So historically, you know, we know that this has been a symbol for a very long time because it was already a trope when Shakespeare was writing about it, right? You know, with the three witches and the "double, double toil and trouble" and all that. So now we inherit it today and it's become sort of a stereotype, but at the same time, a cauldron is a really useful ritual implement, and we're gonna talk about ways that it is useful for us.
That's one version of a Coran, right? This is, that's, we're looking at, that's coming from recent European history, but Qurans are much older and there's, you know, they're always kind of a pot shape, but we don't always see them as that round. Belly kind of shape. Sometimes we see other shapes involved. We're talking about that because that's what we associate with the witches and a lot of the kind of witch aesthetic is coming from a European aesthetic, but remembering that cultures all over the world had versions of this. Mark: Yes. Yes. And we should talk about some some variations that exist for the kinds of formats that people might. Experience as a part of you know, selecting a cauldron for themselves. We're in no way saying you need to go out and spend a couple hundred dollars on, on, you know, a pot beed, three-legged iron cauldron. They're out there, they're really cool, but Yucca: if you're into that, we're not gonna judge you on that, but yeah, you certainly don't need to. Mark: Yeah. And if we, and if we do a ritual with you and there it is, we'll go, Hey, wow. Cool. Caldron, Yucca: Yeah. Mark: But my caldron actually is not one of those, It is a Dutch oven that probably dates from the turn of the 20th century. It's got a lot of rust on it that I've never cleaned off because it's. Yucca: Mm-hmm. Mark: And it has a wire bale that I can pick up and a lid. And I've used it in lots of caldron rituals and it's, it still, it still communicates that sense of antiquity. There's something that's lovely about having a lid for it because it's sort of mysterious. You know, you put the lid on and then some, something magical happens inside it. You take the lid off and things have changed. Yucca: I think that's really interesting because I, mine are also Dutch ovens. 
So mine are very used dutch ovens because I have a wood stove in the home and, and yeah, I have a little propane burner as well for cooking on, but as long as we've got, cuz we do heat with wood in the winter, as long as we've got that going. I love having things up on top of it and you can also stick it into the ashes of the fire. So we've got several different sizes and kind of different shapes there for them. And there's just something about that cast iron, right? Ours are probably are new Dutch ovens. They're probably made within the last few years, but they feel like something that could be around for a very long. Mark: Right, Yucca: They, you know, they could be passed on. My grandkids or great-grandkids could literally be using these. Yeah. Mark: that is the great thing about cast iron is that. It simply doesn't wear out. We use cast iron frying pans in my house and some of them come from thrift chops where they looked hideous. I mean, they're covered with rust and conclusions and just in the worst possible shape. But you get going on, taking all that stuff off, and then Yucca: take that top layer. Yeah. Mark: And it is a perfectly good frying pan once again, and it will be for decades, if not centuries, as long as you keep it from being eaten up by oxidation. Yucca: Yeah. That's what we use all of our, our pans in the kitchen, our, our cast iron, we've got. A couple of stainless steel for boiling, like a pot or kettle stainless steel. But that's, you know, they're just beautiful. And, and some people get very snobby about the exact correct way to treat them and wash them. And, but I think that they're just super forgiving and if you mess up, then you just it, right? You just re season it again. It's great. And enjoy the things you're eating that you're seasoning it with, you know. Mark: Right. And there are some things that you make that will take the seasoning off. 
Like if you cook a tomato pasta sauce, for example, the acids in that may very well take some of the seasoning off the pan. So you put a little oil on, stick it in the oven, heat it up for a while, and you've got a seasoned pan. Yucca: Yeah. Mark: So, an ode to cast iron. We're big fans. Yucca: Right. Well, and so going back to the cauldron, we were saying that we use our cast iron Dutch ovens, but there's a lot of Dutch ovens that are not iron, right? And there's other things that serve the same function today as a cauldron would traditionally. So your big crock pots, right? Or your stockpot. We've got this several-gallon stockpot that, you know, is what I use to heat up the bath water with. And it just has that same vibe, right? And it's modern; it was made within the last 20 years probably, but it still does that same function and looks beautiful at the same time. Mark: Mm-hmm. One of the things that is great about using a Dutch oven actually is because they do have a lid. And what that means is that you have a little bit more control over temperature gradients. For example, if you've got a Dutch oven that is sitting on the fire or in the coals, the bottom of that is gonna get really hot. But the lid, you could put herbs on to create a fragrance in your home, or a little drop of essential oil to do the same thing. Or if you just want to warm things, I mean, I know you can warm bread and stuff like that on the top of a Dutch oven as well. So it's a very versatile tool for a variety of uses. Yucca: And you can also put a fire right into it, right? You could have your candle or something in that, and then when you put your lid on afterwards, you can feel pretty secure that you're not creating a fire hazard with that. Mark: Right. Yucca: So now your lid will heat up too.
So you need to be aware of that if you are using it on the stove and, you know, not touch that with your bare hands, but you could just use it in so many different ways. Mark: Right, right. And there is something about just the sight of that Dutch oven or cauldron heating in a fireplace or over a stove that kind of says home and comfort and warmth and magic, you know, the magic of the kitchen. We were talking before we were recording, and I was mentioning that, you know, one of the things about older times is that your medicines didn't come from a factory. They came from your kitchen, you know, and the cauldron was a key tool for creating them. You know, you'd gather the proper herbs, you'd mash them up in a mortar and pestle, which is another classic alchemical, sort of witchy, magical set of tools, and then you would brew them. Yucca: Today too. Mark: Oh yeah, yeah. We use ours all the time. Yucca: Mm-hmm. Mark: And then, you know, brew them or toast them or, you know, whatever it is in that hot pot. So it's not an accident that domestic tools like the broom and the cauldron are associated with the power of the witch, because that kind of ritual magic, if you will, was really the purview of the home. Yucca: Yeah. Mark: That's where it happened. Yucca: Mm-hmm. Mark: Very different than... oh, go ahead. Yucca: I was gonna say, I think that's still where a lot of it happens, but in our very busy lives, we kind of forget about that sometimes. We're off running around, but when we come back home, back to center, then we go, oh, I actually do have a lot of power from this place. Mark: Mm-hmm. Yes. So, we've established that this is something that has been a symbol for a very long time, and it's been a useful tool for humans even going back into very, very ancient times.
I'm sure we were heating things on hot stones long before we, you know, invented pottery or any of that kind of stuff. Yucca: Right. But as long as we've been in the Neolithic, we've had something of the sort, right? Everybody who's doing the whole staying-in-one-place thing, and even nomadic peoples as well, could have things that they were, you know, packing up and bringing with them. Yeah. Mark: Right. And we've established that cast iron is good. Yucca: Yes. Yay for cast iron. Mark: Big fans of cast iron. Why don't we talk a little bit about the kinds of ritual things that you can do with a cauldron? Yucca: Hmm, okay. Well, I think we could start with incorporating what you would be doing with it to begin with, just on a mundane level, and adding some ritual and meaning into that. So in this case it might be your Dutch oven, but it also might be your stockpot on the stove, right? What are you doing and why are you doing that? So can you add something... can you have a moment when you add in that salt or whatever it is that you're adding in, that you take a moment and just set an intention with that, right? Mark: Yeah, the adding of seasoning and spices I think is a great opportunity for metaphorically adding magic into whatever it is that you're cooking. Spices are kind of magical substances when you think about it. I mean, they are the unique pesticides that various plants have evolved in order to defend themselves from insects mostly, and in some cases from fungal infections and stuff like that. Yucca: And small mammals, and... Mark: Sure, yeah. Yucca: And us too. It's just, we're so big, right? They're technically poisons, right? They're toxins that they produce because they don't wanna be eaten. Everybody wants to survive and reproduce, and they can't get up and run away the way an animal can, or bite you, but they can make themselves poisonous. Mark: Yes.
And they can make themselves taste bad, but Yucca: But we ended up liking Mark: amounts, yes. In small amounts. You're, you're oregano and your tarragon and your sage and your onions, and. Yucca: Yeah. Mark: All those wonderful things. Garlic, I mean, they, they give us wonderful, good feelings and very complex flavors that give us a lot of pleasure. So when casting those things into a cooking pot, we can be setting intentions, we can be stirring them in as meaning, you know,  Yucca: It would be lovely if you made your own labels and added them to the spice jars. Maybe not covering up what they are. If you need to know which is, which is your cayenne and which is your cinnamon, you wanna know the difference, right? But if you put your label on that, you know, Oh, well this one is love, right? And this one is creativity. You know, when you're putting in your love and creativity and all of those things, you see that every time you reach for that spice jar. Mark: I love that idea. That's a great idea. And it would be a really fun project actually, to do with kids to create the labels. Yucca: Yeah. And you could do, You could put them on in ritual too. Mark: Right? Right. Yucca: And even, No even grown up kids. Right. Mark: Oh yeah. I. Yucca: kids of whatever ages. Mark: I would want to be a part of it for sure. Yucca: Yeah. Mark: So we can do cauldron magic in the course of just using the cauldron for the purpose, for an ordinary cooking purpose. Yucca: Mm-hmm. Mark: We can also dispense with anything in the cauldron except fire. We can, we can burn. We can burn fire, burn wood, or you know, whatever it is that don't burn anything toxic because then you're not gonna want to use it for cooking ever again. Yucca: and you wanna be able to be around. You don't wanna breathe in smoke in general, but you wanna be really careful about what it is that you're burning.
So you don't wanna be burning like synthetic fabrics or something like that, that really could be very toxic to you. If you get a little whiff of wood smoke, it's not great, but you know, it's, it's not gonna be quite as much of an issue as burning plastics. Mark: Right, right. Yeah. So, a flaming cauldron is something that we, I've used many times in rituals and you can, you can feed stuff that you want to destroy or dispense with in the form of little pieces of wood that you've invested your intention on or written the message on what you mean. You can do that with slips of paper. You can do that with little symbols that are flammable of, of some kind. So that's sort of the destructive approach to a flaming cauldron. But you can also do it with wishes. You can inscribe something hoped for, that you want to, The smoke will go up into the sky and inform whatever powers are up there and, and they'll put in an order for you. Yucca: Or thinking of it as this is fuel, right? This is, this is the fuel for the fire. That, that whatever it is burning inside of you, right? What is it that you want to feed into your fire to, for you to continue to grow and do all of these, you know, passionate, wonderful things, whatever it is that you are focused on. Mark: Right, And in the case of a ritual like that, I really encourage people to use low tech methods of actually lighting the fire. So that it, it takes a little effort, right? You know, whether that's a flint and steel or I, I don't recommend lighting a fire with a bow because it's an incredible amount of work. And it, you can have disappointing results while you're trying to light your inspiration fire. Yucca: Yeah. Well if, if you do, you might wanna practice that ahead of time and be, and get really good at it. Right. Just knowing that it is a skill that takes a lot of work. Mark: Yes. Yucca: Yeah. Mark: But there is, there is something to be said for something more than just flicking a lighter and
Suddenly there is flame. Yucca: Yeah. Well, and it, and you know, if you don't have access to one of those matches, right? There's something more, I, I find there's something very satisfying about striking the match as opposed to just the lighter. Although there are some really cool lighters. We were given one of those arc lighters. Mark: I have one I use it for, for my altar, my focus all the time. Yucca: Yeah, I feel so sci-fi, whenever I use Mark: Yeah. Yucca: like, yeah. It's just really nice and it's USB chargeable, so we just like plug it in and don't have to, I've got lots of lighters and matches all over the place because I don't wanna ever. Want to be lighting a fire and be shivering and being like, Where are my matches? Where are my lighters? But those are fun, but you know, there's matches. And there's also, I don't know what they're actually called, but you know, the ones we'd use in lab class for Bunsen burners? The, Mark: Oh, those little piezo electric things that, Yucca: Yeah, they're silver and you Mark: spark. Yucca: Yeah. Those are, you know, when you have a more. Just an out of the ordinary or kind of fun way of starting the fire. There's a little something extra to it. Mark: Right, right. There are these striker, they're, they're sort of like flint and steel. They're these sort of striker sticks that you scrape sparks off of onto like cotton or something, which will light on fire. And those are pretty neat for starting a fire too. I don't know what they're called exactly either, but they're you can get them in camping stores. Yucca: Okay. Mark: add to a survival Yucca: Oh, I think I've seen them and they, You can like put them on a key chain or something like that. Yeah, Yeah. Now you gotta be patient with anything like that that doesn't have a sustained flame because you're trying to catch that. Spark, Mark: Yeah.
Yucca: like if you have like a little cotton swab from the bathroom, like those are really good and you maybe half of it, you dip into olive oil and the other half you leave open so that then it starts to burn the oil. And there's a lot of, that's another thing that you could do fire related is little fat lamps, little fat and oil lamps. Those are really fun. Mark: Right. Yucca: This year the kids and I, so they're, they're sort of, So my stepmother lives up on our, where we do as well and is really into finding the, the clay here and making things and firing it. So they made little oil lamps. Yeah, so they made little oil lamps and we've been using lard in them and they worked remarkably well and doesn't smell like a fast food restaurant. I was very happy for that. Mark: That's amazing. Yeah, we've used, Nemea uses tallow quite a bit in cooking and Yucca: how, Mark: Yeah, so we've, we've, I've used that sometimes as sort of an accelerant for a fire to get started, but, okay, so that's the fire inside the cauldron. That's one whole set of things you can do. Yucca: Mm-hmm. Mark: Then there's the adding ingredients into the cauldron kind of. The, the classic example of that is stone soup, where everybody brings an ingredient and you start with water or stock. Could be vegetable stock, could be chicken, Yucca: Mm-hmm. , beef, lamb, whatever you have Mark: Whatever you Yucca: and whatever matches your, your dietary approaches. Yeah. Mark: Right. And then people add ingredients and the whole thing becomes soup. Which. Is a lot more satisfying than it sounds. There is, there is really something wonderful about the kind of ceremonial, adding by a whole lot of different people of what they in particular have brought to add to a given dish. And then it's all put together, it's cooked, and then it's distributed out to everyone to enjoy. There's something very poetic about that, that process. Yucca: Yeah. Hmm.
Mark: And then you can also do sort of magical potions, which aren't meant to be ingested, Yucca: Right. Mark: With whatever ingredients you feel are necessary. Now, bear in mind, cast iron is a little bit porous, Yucca: Yeah. So if you're gonna eat from it again, you don't wanna be putting non edible things in there, Mark: right? Right. You know, no mercury Yucca: Yeah. Or I, I don't know why this one's coming to mind, but shampoo. Right, because shampoo, like there's really good smelling shampoos that'll bubble up really nicely. Like you could do some really kind of fun smelling and looking things with, with soap shampoos and soaps and stuff like that. But you don't want, you don't want that in your mouth. Mark: No. Yucca: And that's gonna spoil whatever you try and cook in there next. Right? If you get it out cuz you, you're not feeling well and you need that good soup, you know, And then, Oh, shampoo soup. Mark: it's, this is lemon-scented shampoo. Oh, dear. Yucca: Yeah. But if it's one that you are using only for ritual and decorative purposes, that's very different. Mark: Right? Yucca: Right. Mark: Yeah.  Yucca: I suppose you could line it with foil or something like that, but it's kind of taking a risk. Mark: You know, if you really want a sort of bubbly, frosty effect I would just go for the dry ice, you know, put a little little layer of water in the bottom of the cauldron, set in a block of dry ice. You'll get abundant fog pouring out of it. It'll look really cool. If you want to change the color, you can break a light stick and drop it in there. So that you've got like a green fog coming out or, Yucca: but that you cannot use for food again. Mark: Oh, I. Yucca: a light stick. Mark: I didn't mean to Yucca: Oh, good. Okay. You mean snap it so it activates? Mark: it so it activates Yeah. And drop it in there. Yucca: Well, and with the dry ice, there's nothing to clean up afterwards, which is really nice. Right.
If when it come, it billows out, you know, might get things, you know, little damp, but not, you know, you're not gonna have to be mopping anything or cleaning anything up. Mark: right. Be sure you've got good ventilation. Yucca: Yes. Mark: Dry ice is CO2. CO2 is poisonous. That's why we breathe it out, because we don't use it. Yucca: Yeah. Mark: you just wanna make sure that you've got good ventilation in the room so that you don't get overcome by CO2 and pass out. Yucca: Right, Because if we, I mean, we breathe CO2 in and breathe it back out, but the problem is it's not oxygen. It isn't the same as carbon monoxide, which is really problematic for us because our bodies confuse it with oxygen and then it basically makes us suffocate. But CO2, yeah. That sort of thing you might wanna be doing either outside or with making sure you have the windows open, but yeah. And also when you're doing, going back to the fire, one being mindful about what size is your flame going to be, Right. If you're lighting a little candle inside of your little cauldron in the kitchen, you're probably fine, but if you're pouring something in Mark, you have a, Don't you have a story about a Mark: Oh yes, Yucca: flame that came out Mark: the flame vortex. Yucca: Yeah. That you wanna be outside for, with, you know, appropriate fire safety equipment. Yeah. Go. So what happens with your Mark: Well, what What happened was we did a ritual where we burned some intentions for the coming year, and the cauldron was sitting on top of coals and there was still some flame there. So the bottom of the, the cauldron was very warm. And what we did was afterwards we poured in two bottles simultaneously, two bottles of cheap red wine. And it was hot enough that the wine boiled on contact with the bottom of the pan, which we assumed was going to happen for the first little bit that we poured in. And then, Yucca: you gonna make mulled wine or something? Is the Okay? Mark: Yes.
And, and mulled wine, which included the ashes of the Yucca: beautiful. Mm-hmm. Mark: had, you know, been. Been burning there, and then we could all have a sip. Well, what ended up happening was that the entire pot boiled, it boiled off the alcohol and the alcohol lit on fire, and created this sort of fire tornado that extended up maybe three feet above the, the lid of the, or the edge of the cauldron. And it did that for about 20 seconds. So what we ended up drinking had no alcohol in it for one thing, and it wasn't particularly tasty because it had been boiled also. But it's a pretty cool effect if you, if you wanna do that again, it just don't do it indoors. Yucca: Do it outdoors to have all of your, you know, your fire extinguisher or whatever you need Yeah. To put it out. Right. And maybe not, you know. Not near a bunch of, you know, brush and all of that. Mark: Yeah. Or overhanging branches, which is the thing that people often forget because the picture in their mind is of a fire that is, you know, a nice contained fire that only leaps up about a foot above whatever the container is. But sometimes fires get a mind of their own and they, they get bigger than that and then they can start to catch the, the tree branches that are over the top. So you need to be, you need to be careful with fire, Yucca: Yeah. And you know, whatever the safety is in your area, check, check with your county regulations. Is there a fire ban on at the moment and all of that, because you don't wanna burn your, your neighborhood down. So yeah, Mark: Yeah. Yucca: Fire is one of those, those interesting things. We have this lovely, beautiful relationship with it spanning back literally millions of years, but it's also extremely destructive. Mark: It's very dangerous.
The fact that we were able to domesticate this incredibly dangerous chemical process is really a testament to courage in our, in our ancestry, honestly, because when we first got it, it was probably burning trees that have been struck by lightning Yucca: Mm-hmm. Mark: and you know, I would think you probably wouldn't wanna go near a tree that had been struck by lightning in case it got struck again. Right. Yucca: Yeah, and it's still, you know, can still be hot. The, the kids and I are reading some Greek mythology right now and we actually just were reading about Prometheus and my oldest asked, Well, mom, why was Zeus so mad about fire? What's the big deal about giving humans fire? When we had to go through all the things that fire can do, how powerful Mark: Mm-hmm. Yucca: it made people, they went, Oh, okay. Still doesn't seem like a fair consequence. Mark: Well, yeah, Eternal torment never seems like a fair consequence. So, yeah. Yucca: they were very sympathetic to poor Prometheus, so yeah. Mark: So, the last kind of ritual that I can think of is the kind of potion making where. Where you're, you're mixing something up, which you're then going to pour off into jars or into, you know, like if you're making spell jars for example, and there's particular ingredients that you want in all of them. So you mix up sort of a, a formula of what all those different elements are, and then you can pour them off into jars and maybe add material items before closing them and sealing them. Yucca: What would be an example of a type of, of ritual that you would do with one of these? Mark: I haven't done a whole lot of spell jar rituals myself, but I know of people that have done like spell jar protection symbols for their, for their land, Yucca: So they would bury it in the four corners or. Mark: Right. Yeah. Bury those, you know, at the boundaries in order to, well, realistically speaking in order to help them feel more protected.
Yucca: Well, that's the point of the ritual, right? Mark: that's the point of the ritual. Exactly. I mean, many of the magical rituals that have been implemented over human history have been to try to get control over stuff that we don't have control. Yucca: Mm-hmm. Mark: It just helps us feel better and that's fine. There's, there's nothing wrong with that. There's, it's absolutely a great thing to do. So, for example, if you had You know, water from a particular well and maybe some river water and some ocean water and some wine and some, I don't know. I'm trying to think of, you know, a few drops of blood. Whatever you wanted to put in there. You could stir all that up together. Add in whatever other. Miscellaneous ingredients felt like the right thing to do and then could decant out of the caldron. But you, you get to do that big stirring motion on the caldron, right? That, that wonderful double, double toil and trouble kind of thing. And so you can chant over it, you can sing over it, you can you can do that solo or you can do that with a group. Everybody can get a turn to do the stirring. I've seen that before. And then you pour off into the jars and put in items. I, I know that historically spell jars have been found that are full of nails, Yucca: Okay. Mark: that are sort of meant to protect against stuff, right? Put these sharp objects in to protect people from from what they don't want to contend with. Yucca: Well, brainstorming as, as you were talking about that everybody putting something in. Maybe one thing you could do is if you're with a group or you could do it on your own, having a, a jar that you're preparing for later when you're having a hard time, Mark: Mm-hmm. Yucca: the, oh, you know, here's the, all the, the friendship and joy and, and sense of connection and, you know, there's gonna be a day when I'm feeling alone and I need to, to open that up. 
To remember that, you know, I have this connection and this appreciation for the community or, or a day where, where you put patience into the jar. So when you're all out of patience, you can, you have a jar of patience stored on that back shelf that you can open up, right? Mark: Mm-hmm. Yucca: Things like that. Mark: Yeah. You could pour what's in there as a libation for a, a plant or just onto the earth as a way of releasing its power. And then you have a jar that you can refill again and do another spell with, I have patience in knots. Yucca: Ah Mark: so when I really need it, I can untie one of the knots on my patience string and let some patience out. Yucca: hm. Mark: It at least gives me something to do other than reacting angrily in the, in the immediate term, cuz the knots are pretty tight, so it takes a while to get 'em undone. Yucca: Mm-hmm. . And do you have a time when you go back through and retie everything Mark: I haven't had to do that yet. I think I've got four or five knots left on my, on my patience string. But yeah, we did that in the, in a ritual of the Saturday morning mixer, Atheopagan mixer that we do on Zoom. So. I found it useful. I've actually used it twice but I'm sure there will come a time when it's empty and I've gotta refill it. Yucca: Yeah. Hmm. Well, these have been, these have been fun to think about different ideas to do with cauldrons, and of course there's, you know, there's so many more that we didn't mention.  Mark: Right. Yeah. The, the wonderful thing about having a, a ritual practice is that it's re it's everything that your imagination can come up with. Yucca: Mm-hmm. Mark: And of course, we like to swap our ideas so that we can take advantage of others' imagination as well. And I hope that some of the ideas that we've talked about here today are helpful to you. But if you don't have some kind of a. Big cooking receptacle really encourage you to, to consider adding that to your magical tools.
It's it, it really is a, a very useful thing both for individual work and for group rituals. Yucca: Right. And beautiful. Mark: Mm-hmm. Yucca: Right? Depending on your style, I know some people like to. Put their, their ritual tools away and wrap them in the beautiful cloths and things like that. And, and some people like to have them out on display because they like looking at them and they make them feel good when they see it. So it's both completely valid approaches. It just depends on what, what works for you. Mark: Right, Right. Yeah. So there you have it. Cauldrons in nontheist pagan practice. Pretty cool. Yucca: Yeah, Mark: I'm so glad it's October. Yucca: me too. Well, thank you for another great discussion and we will be back to see or talk with all of you next week Mark: Yeah, thanks everybody. Yucca: I believe. Mark: Oh yes. Talking about death. Yucca: Yes, it's October, Mark: Gotta do it. Yucca: All right. Thanks everyone. Mark: Bye bye.

Bill Hartman's Coaching Conversation
The Bill Hartman Podcast for The 16% - Season 14 - Number 3


Play Episode Listen Later Aug 28, 2022 45:12


This week's topics:
Gradients, Positions, Compression, Expansion
Bias, Models, and Systems as Limitations
Squat, Structure, and Movement Principles
Supine Cross Connect Don'ts
Clinical Reasoning

Jesse Nyberg Podcast
Paulina Almira - Artist Mastering Gradients & Creating Surreal Worlds Ep.77


Play Episode Listen Later Jun 9, 2022 53:40


Paulina Almira is my guest today on The Jesse Nyberg Podcast. Paulina is a talented illustrator from the Philippines and creates some of my favorite work. Her work is filled with magical creatures, beautiful gradients, and amazing color palettes. In this episode, Paulina and I chat about working with big clients like Nike, why she likes making art with eyeballs, and switching up your style from time to time.
Paulina Links: https://paulinaalmira.carrd.co/
If you want more content or just want to support the podcast/channel then check out: https://www.patreon.com/Jessenyberg

Machine Learning Street Talk
#69 DR. THOMAS LUX - Interpolation of Sparse High-Dimensional Data


Play Episode Listen Later Mar 12, 2022 50:38


Today we are speaking with Dr. Thomas Lux, a research scientist at Meta in Silicon Valley. In some sense, all of supervised machine learning can be framed through the lens of geometry. All training data exists as points in Euclidean space, and we want to predict the value of a function at all those points. Neural networks appear to be the modus operandi these days for many domains of prediction. In that light, we might ask ourselves: what makes neural networks better than classical techniques like k-nearest neighbour from a geometric perspective? Our guest today has done research on exactly that problem, trying to define error bounds for approximations in terms of directions, distances, and derivatives. The insights from Thomas's work point at why neural networks are so good at problems which everything else fails at, like image recognition. The key is in their ability to ignore parts of the input space, do nonlinear dimension reduction, and concentrate their approximation power on important parts of the function.
[00:00:00] Intro to Show
[00:04:11] Intro to Thomas (Main show kick off)
[00:04:56] Interpolation of Sparse High-Dimensional Data
[00:12:19] Where does one place the basis functions to partition the space, the perennial question
[00:16:20] The sampling phenomenon -- where did all those dimensions come from?
[00:17:40] The placement of the MLP basis functions, they are not where you think they are
[00:23:15] NNs only extrapolate when given explicit priors to do so, CNNs in the translation domain
[00:25:31] Transformers extrapolate in the permutation domain
[00:28:26] NN priors work by creating space junk everywhere
[00:36:44] Are vector spaces the way to go? On discrete problems
[00:40:23] Activation functions
[00:45:57] What can we prove about NNs?
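The geometric point in this description, that distance-based methods like k-nearest neighbour weigh every coordinate equally while a good learner can "ignore parts of the input space", can be illustrated with a small toy sketch (my own illustration, not code from the episode; all names and the toy target function are invented for the example):

```python
# Toy illustration: 1-nearest-neighbour interpolation of a function that
# depends on only 1 of 10 coordinates. Measuring distance over all ten
# axes lets the nine irrelevant ones dominate neighbour choice, while
# distance over the single relevant axis interpolates far more accurately
# from the exact same training data.
import math
import random

random.seed(0)
DIM = 10

def target(x):
    return math.sin(3 * x[0])  # depends only on the first coordinate

train = [[random.random() for _ in range(DIM)] for _ in range(200)]
labels = [target(x) for x in train]

def nn_predict(q, dims):
    """1-nearest-neighbour, using only the listed coordinates for distance."""
    best = min(range(len(train)),
               key=lambda i: sum((train[i][d] - q[d]) ** 2 for d in dims))
    return labels[best]

test = [[random.random() for _ in range(DIM)] for _ in range(200)]
err_all = sum(abs(nn_predict(q, range(DIM)) - target(q)) for q in test) / len(test)
err_rel = sum(abs(nn_predict(q, [0]) - target(q)) for q in test) / len(test)

print(err_rel < err_all)  # ignoring the irrelevant dimensions wins
```

The gap between the two errors grows with the number of irrelevant dimensions, which is one way to read the episode's claim about why networks that can do nonlinear dimension reduction outperform purely distance-based interpolators.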
Gradients without backprop

References:
Interpolation of Sparse High-Dimensional Data [Lux]: https://tchlux.github.io/papers/tchlux-2020-NUMA.pdf
A Spline Theory of Deep Learning [Balestriero]: https://proceedings.mlr.press/v80/balestriero18b.html
Gradients without Backpropagation '22: https://arxiv.org/pdf/2202.08587.pdf
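The last reference, "Gradients without Backpropagation", builds on forward-mode differentiation, whose core mechanism can be sketched with dual numbers in a few lines (a hedged toy sketch of the underlying idea, not the paper's code; the class and function names are my own):

```python
# Minimal dual-number sketch of forward-mode differentiation: one forward
# pass carrying a tangent vector v yields both f(x) and the directional
# derivative grad(f)(x) . v, with no backward pass at all.
import math

class Dual:
    def __init__(self, val, tan=0.0):
        self.val, self.tan = val, tan
    def _lift(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._lift(o)
        return Dual(self.val + o.val, self.tan + o.tan)
    __radd__ = __add__
    def __mul__(self, o):  # product rule carried alongside the value
        o = self._lift(o)
        return Dual(self.val * o.val, self.val * o.tan + self.tan * o.val)
    __rmul__ = __mul__

def sin(d):
    return Dual(math.sin(d.val), math.cos(d.val) * d.tan)

def f(x, y):
    return sin(x * y) + 3 * x  # any composition of the overloaded ops works

# Seed the tangents with v = (1, 0) to read off df/dx at (0.5, 2.0):
out = f(Dual(0.5, 1.0), Dual(2.0, 0.0))
expected = 2.0 * math.cos(1.0) + 3.0  # analytic df/dx = y*cos(x*y) + 3
print(abs(out.tan - expected) < 1e-12)
```

The paper's "forward gradient" then samples the tangent v at random and uses (grad(f) . v) v as an unbiased estimate of the full gradient, so training can proceed without ever running a backward pass.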

From the Ground Up Athletic Performance Podcast
Ryan Foley & Kyle Paxton IKN Episode 44 " Examining movement in light of force distribution, Proximal & Distal Considerations, Control Gradients to inform isometric & Joint position"


Play Episode Listen Later Mar 1, 2022 69:40


On this episode I sat down with the founders of Integrated Kinetic Neurology, Ryan Foley and Kyle Paxton. We began the discussion by talking about the body being an integrated learning system; Ryan also shares that we must view movement within a given context in order to make worthwhile reductions about movement strategies. Proximal and distal movement strategies are discussed in multiple formats throughout the conversation, and Ryan shares that we develop movement proficiencies in a proximal nature before we begin to distribute movements distally. As we develop we become more driven by distal drivers, except in certain situations. The anatomy of the body is taken into consideration to support the distal distribution of forces, and Ryan and Kyle share about limb tapering and why we are arranged in a strategic manner morphologically to allow for distal distribution of forces. Intensity is discussed in multiple facets throughout the discussion, and movement strategies in a low-intensity setting differ substantially from the choices available under time constraints. Oftentimes in lower-intensity situations individuals may exhibit movement strategies that would be more appropriate for high-intensity situations. One can make the inference that this would be a dangerous and expensive movement strategy. Attractors and fluctuations are ways to offer meaningful opportunities for individuals to learn. Ryan shares that sometimes it's not about learning new strategies; sometimes it's about destroying certain strategies. One of the main attractors that should be given major consideration within rehab and training is the capacity of tissue to buffer and dampen forces and load tissue appropriately. Muscle tone is discussed, and there are two perspectives that this can be viewed from: the protective mechanism or the performance mechanism.
We discuss feedforward versus feedback strategies and discuss the role of the vestibular and visual systems in providing appropriate models of internal estimation. Feedforward strategies allow the expression of certain amounts of activity before the foot hits the ground and allow for a better overall distribution of forces. One strategy is not superior; proper preparation seems to really push for a better overall integration of the two strategies. We end the conversation by talking about the concept of neuromechanical control gradients and how that may allow us to make more meaningful choices in isometrics and joint angles for given movements. There are four lenses through which we can view control gradients: 1) Muscular, 2) Neural, 3) Joint, 4) Tension. From a muscular perspective, distal tissues need to work more in an isometric fashion. From a neural lens, there is variability in the "highways" that move to proximal and distal portions of the body. As far as joints are concerned, skeletal organization acts as a constraint to make it easier for the nervous system to distribute forces. The body's structure often flows from complex to simple to complex; there must be some form of simplicity for a complex system to be controllable. Tension is the last lens we discuss, and that could most closely be tied to muscular orientation within the context of a movement.
IKN Insta
IKN Webpage
Ryan Foley Insta
Kyle Paxton Insta

That's Nifty
Carlos Marcial


Play Episode Listen Later Feb 28, 2022 96:20


On the 59th episode of “That's Nifty” we sat down with Carlos Marcial, a crypto native who has been tokenizing his work for 2.5 years. His energy, outlook, and perspective on crypto, DeFi, and NFTs bring a much-needed positive spin on where we might be heading.
Carlos Marcial
Twitter: @carlosmarcialt
Website: https://superrare.com/carlosmarcialt/creations
Topics: ENERGY, Larry and Tommy, Crypto Native, Film Director, All roads lead to Silk, Chinese Connections, Macro BTC View, Full Time Crypto Artist, Sovereignty, Designer to Artist, Gradients of Decentralization, Sci-Fi Films and VFX, Geriatric Millennial, Exponential Growth of Homo Sapiens, Infinite Rooms, LOOPS, Writing on Stone: Blockchain, Minting my mistakes, Trying out every platform, SuperRare: Space Race winner, Contemporary Art, Validation, Auction Houses, Sculptures with embedded NFTs, Lambo Carlos, Crypto allows people in Developing Countries to find stability, Decentralizing Culture even within the US, Technology, Poverty, Ecology, Cool Guy Carlos: Giving Advice, Who's Who, Documenting History, Blue Streak, MetaVRse Exhibition: 0x Society
Mentions: @beeple @aeforiadesign @osinachiart @MattKaneArtist @0x_society @aeforiasdad

COMPRESSEDfm
40 | Design Trends for 2022


Play Episode Listen Later Dec 21, 2021 45:56


In this episode, Amy and James discuss design trends to look forward to in 2022, including gradients with grain, large typography, and interactivity.

Sponsors

Vercel
Vercel combines the best developer experience with an obsessive focus on end-user performance. Their platform enables frontend teams to do their best work. It is the best place to deploy any frontend app. Start by deploying with zero configuration to their global edge network. Scale dynamically to millions of pages without breaking a sweat.
For more information, visit Vercel.com

ZEAL is hiring!
ZEAL is a computer software agency that delivers “the world's most zealous” and custom solutions. The company plans and develops web and mobile applications that consistently help clients draw in customers, foster engagement, scale technologies, and ensure delivery.
ZEAL believes that a business is “only as strong as” its team and cares about culture, values, a transparent process, leveling up, giving back, and providing excellent equipment. The company has staffers distributed throughout the United States, and as it continues to grow, ZEAL looks for collaborative, object-oriented, and organized individuals to apply for open roles.
For more information visit softwareresidency.com/careers

Tella
It's 2021 and we all basically live on video. Tella is a browser-based screen recorder for making videos that showcase your work and share your knowledge. You can record your screen, camera, and present slides. And then you can also customize your videos with backgrounds, layouts, and other video clips. When you're done, share your video anywhere on the web, instantly.
For more information visit tella.tv

Show Notes
0:00 Introduction (Article on Webflow: Web Design Trends in 2022)
8:35 #1 Mini-Sites of Delight
10:31 Sponsor: Vercel
11:38 #2 App Like Experiences
12:48 #3 Art Deco
13:03 #4 Line Work
16:46 #5 and #6 Fewer Images and Oversized Typography
18:02 #7 Interactivity
18:23 #8 Collages and Abstract Illustrations
19:30 #9 Gradients with Grain (Charli Marie Podcast, Inside Marketing Design at Stripe)
21:09 #10 Glass Morphism
22:13 Sponsor: Tella.tv
23:26 #11 Scrolling Animations
25:00 #12 Less Neo Morphism
28:13 #13 Inclusive Copy
28:19 #14 Gender-neutral Design
28:30 #15 Page Speed Prioritization
31:24 #16 No Code
32:30 #17 More Emphasis on Users
32:54 Sponsor: ZEAL
33:49 Grab Bag Questions
34:00 Grab Bag Question #1: Will we avoid new trends that end with morphism?
37:21 Grab Bag Question #2: Trends for Feature Discovery
39:02 Grab Bag Question #3: Minimalistic Design
42:00 Picks and Plugs
42:11 Amy's Pick: Unstable Unicorns
42:49 Amy's Plug: Advent of CSS
43:25 James's Pick: Mistborn Series
44:43 James's Plug: Advent of JavaScript