Pure NFL greed (Cody & Gold, 610 Sports Radio; Fri, 13 Feb 2026): Hosts Cody Tapp & Alex Gold team up for 610 Sports Radio's newest mid-day show "Cody & Gold." Two born-and-raised Kansas Citians, Cody & Gold have been through all the highs and lows as KC sports fans, and they know the passion Kansas City has for its sports teams. "Cody & Gold" will be a show focused on smart sports conversation with the best voices from KC and around the country. It will also feature our listeners with your calls, texts & tweets, as we want you to be a part of the show, not just a listener. Cody & Gold, weekdays 10a-2p on 610 Sports Radio.
The boys are BACK talking Beanpot aka Deanpot, Hagens + Deano, McAvoy response to Department of Player Safety, Olympics, 1980 doc ++ PLENTY more. Make sure to follow us on Twitter @OnlyBruinsPod @DowntownBoosy2 @BrettHoward_ @BobbieBrewski. Follow us on TikTok @onlybruins. Follow us on Instagram @OnlyBruins_. Follow us on YouTube @Onlybruinspodcast. Make sure to check out our Pure Hockey link and get the best hockey gear out there! https://alnk.to/bisa9vc
The invisible realm of space has something mysterious, fascinating, and at times genuinely surreal about it, with dimensions, distances, and energies that go well beyond the parameters governing life on Earth, and sometimes even beyond the rules of physics itself as we understand them. A terra incognita – space – that human beings have looked to for millennia in trying to give meaning and order to their own existence. The fascination of the universe probably springs from this deep interweaving of scientific curiosity, the need for meaning, and ancestral human emotions. And it is this fascination we dive into in this episode of "Laser," together with Valentina Tamburello, an astrophysicist and researcher at the University of Zurich who for years has been involved in various collaborations, most recently with ESA, the European Space Agency. What is dark matter, what are black holes, and why are they such extreme phenomena? And then there are the Voyager probes at the edge of our galaxy, and the James Webb telescope, a true technological jewel that for three years has allowed us to see the universe as we had never seen it before. But we also carry mysterious, unexplored corners of the universe within ourselves. Who doesn't know the feeling of having already lived through something, or that sense of belonging and universal love felt by those who have had a near-death experience, who meditate, or who take certain psychedelic substances? Mere autosuggestion and hallucination, or real phenomena that are increasingly finding an explanation in quantum physics? We talk about this too with astrophysicist Valentina Tamburello of the University of Zurich.
Raw... Pure... Techno
George Orwell spoke bluntly about the nefarious nature of advertising, calling it "the rattling of a stick inside a swill bucket." Even Orwell, though, would've been astonished by the cacophony of swill bucket advertising currently being blasted at us by Amazon, Google, Meta, and other profiteering tech giants. What are they trying to sell? Pure hogwash. Having spent billions to develop artificial intelligence so humanoid robots can displace workers, the tech geniuses are now rushing to build thousands of vast computer data centers necessary to power their Brave New AI World. Each center will suck up local water supplies, drastically raise people's utility bills, create monstrous industrial blight and pollution, and enthrone such autocratic thugs as Bezos, Musk, and Zuckerberg as absentee bosses with domineering power over each locality. But the billionaires forgot something: You and me. "We the People" are in open rebellion against this Orwellian future, with officials in multiple states and localities "Just Saying Hell No" to the profiteers' invasive scams. Thus, the billionaire hucksters are frantically rattling their swill sticks. For example, Mark Zuckerberg – whose Meta goliath already operates 26 massive data centers and is now spending $600 billion to plop more of them in our communities – has launched a multimillion-dollar offensive to beat back local opponents. It's running BS television ads in state capital cities, financing political candidates to hype the data centers, deploying untold numbers of lobbyists to rig the rules against opponents, and hiring an army of "community affairs" agents to spread AI propaganda. The swill bucket brigade has the fat cats, but a groundswell of us alley cats has them on the run. To get involved, go to mediajustice.org/tools. Do something! The Center for Media Justice has been leading the way in fighting data centers in lots of communities around the country; here's how they beat back one in Amarillo, TX, for example. Get involved at mediajustice.org! Jim Hightower's Lowdown is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit jimhightower.substack.com/subscribe
From rewriting Google's search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions from CPUs and sharded indices to multimodal models that reason across text, video, and code.Jeff joins us to unpack what it really means to “own the Pareto frontier,” why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules) not FLOPs is becoming the true bottleneck, what it was like leading the charge to unify all of Google's AI teams, and why the next leap won't come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.We discuss:* Jeff's early neural net thesis in 1990: parallel training before it was cool, why he believed scaling would win decades early, and the “bigger model, more data, better results” mantra that held for 15 years* The evolution of Google Search: sharding, moving the entire index into memory in 2001, softening query semantics pre-LLMs, and why retrieval pipelines already resemble modern LLM systems* Pareto frontier strategy: why you need both frontier “Pro” models and low-latency “Flash” models, and how distillation lets smaller models surpass prior generations* Distillation deep dive: ensembles → compression → logits as soft supervision, and why you need the biggest model to make the smallest one good* Latency as a first-class objective: why 10–50x lower latency changes UX entirely, and how future reasoning workloads will demand 10,000 tokens/sec* Energy-based thinking: picojoules per bit, why moving data costs 1000x more than a multiply, batching through the lens of energy, and speculative decoding as amortization* TPU co-design: predicting ML workloads 2–6 years out, speculative hardware features, precision reduction, sparsity, and the constant feedback loop between model architecture and silicon* Sparse models and “outrageously large” networks: trillions of parameters with 1–5% activation, and why sparsity was always the right abstraction* Unified vs. 
specialized models: abandoning symbolic systems, why general multimodal models tend to dominate vertical silos, and when vertical fine-tuning still makes sense* Long context and the illusion of scale: beyond needle-in-a-haystack benchmarks toward systems that narrow trillions of tokens to 117 relevant documents* Personalized AI: attending to your emails, photos, and documents (with permission), and why retrieval + reasoning will unlock deeply personal assistants* Coding agents: 50 AI interns, crisp specifications as a new core skill, and how ultra-low latency will reshape human–agent collaboration* Why ideas still matter: transformers, sparsity, RL, hardware, systems — scaling wasn't blind; the pieces had to multiply togetherShow Notes:* Gemma 3 Paper* Gemma 3* Gemini 2.5 Report* Jeff Dean's “Software Engineering Advice fromBuilding Large-Scale Distributed Systems” Presentation (with Back of the Envelope Calculations)* Latency Numbers Every Programmer Should Know by Jeff Dean* The Jeff Dean Facts* Jeff Dean Google Bio* Jeff Dean on “Important AI Trends” @Stanford AI Club* Jeff Dean & Noam Shazeer — 25 years at Google (Dwarkesh)—Jeff Dean* LinkedIn: https://www.linkedin.com/in/jeff-dean-8b212555* X: https://x.com/jeffdeanGoogle* https://google.com* https://deepmind.googleFull Video EpisodeTimestamps00:00:04 — Introduction: Alessio & Swyx welcome Jeff Dean, chief AI scientist at Google, to the Latent Space podcast00:00:30 — Owning the Pareto Frontier & balancing frontier vs low-latency models00:01:31 — Frontier models vs Flash models + role of distillation00:03:52 — History of distillation and its original motivation00:05:09 — Distillation's role in modern model scaling00:07:02 — Model hierarchy (Flash, Pro, Ultra) and distillation sources00:07:46 — Flash model economics & wide deployment00:08:10 — Latency importance for complex tasks00:09:19 — Saturation of some tasks and future frontier tasks00:11:26 — On benchmarks, public vs internal00:12:53 — Example long-context benchmarks & limitations00:15:01 — Long-context goals: attending to trillions of tokens00:16:26 — Realistic use cases beyond pure language00:18:04 — Multimodal reasoning and non-text modalities00:19:05 — Importance of vision & motion modalities00:20:11 — Video understanding example (extracting structured info)00:20:47 — Search ranking analogy for LLM retrieval00:23:08 — LLM representations vs keyword search00:24:06 — Early Google search evolution & in-memory index00:26:47 — Design principles for scalable systems00:28:55 — Real-time index updates & recrawl strategies00:30:06 — Classic “Latency numbers every programmer should know”00:32:09 — Cost of memory vs compute and energy emphasis00:34:33 — TPUs & hardware trade-offs for serving models00:35:57 — TPU design decisions & co-design with ML00:38:06 — Adapting model architecture to hardware00:39:50 — Alternatives: energy-based models, speculative decoding00:42:21 — Open research directions: complex workflows, RL00:44:56 — Non-verifiable RL domains & model evaluation00:46:13 — Transition away from symbolic systems toward unified LLMs00:47:59 — Unified models vs specialized ones00:50:38 — Knowledge vs reasoning & retrieval + reasoning00:52:24 — Vertical model specialization & modules00:55:21 — Token count considerations for vertical domains00:56:09 — Low resource languages & contextual learning00:59:22 — Origins: Dean's early neural network work01:10:07 — AI for coding & human–model interaction styles01:15:52 — Importance of crisp specification for coding agents01:19:23 — 
Prediction: personalized models & state retrieval
01:22:36 — Token-per-second targets (10k+) and reasoning throughput
01:23:20 — Episode conclusion and thanks
Transcript
Alessio Fanelli [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by Swyx, editor of Latent Space. Shawn Wang [00:00:11]: Hello, hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome. Thanks for having me. It's a bit surreal to have you in the studio. I've watched so many of your talks, and obviously your career has been super legendary. So, I mean, congrats. I think the first thing must be said, congrats on owning the Pareto Frontier.Jeff Dean [00:00:30]: Thank you, thank you. Pareto Frontiers are good. It's good to be out there.Shawn Wang [00:00:34]: Yeah, I mean, I think it's a combination of both. You have to own the Pareto Frontier. You have to have like frontier capability, but also efficiency, and then offer that range of models that people like to use. And, you know, some part of this was started because of your hardware work. Some part of that is your model work, and I'm sure there's lots of secret sauce that you guys have worked on cumulatively. But, like, it's really impressive to see it all come together.Jeff Dean [00:01:04]: Yeah, yeah. I mean, I think, as you say, it's not just one thing. It's like a whole bunch of things up and down the stack. And, you know, all of those really combine to help make us able to make highly capable large models, as well as, you know, software techniques to get those large model capabilities into much smaller, lighter weight models that are, you know, much more cost effective and lower latency, but still, you know, quite capable for their size. Yeah.Alessio Fanelli [00:01:31]: How much pressure do you have on, like, having the lower bound of the Pareto Frontier, too? I think, like, the new labs are always trying to push the top performance frontier because they need to raise more money and all of that. And you guys have billions of users. And I think initially when you worked on the TPU, you were thinking about, you know, if everybody that used Google used the voice model for, like, three minutes a day, they were like, you need to double your CPU number. Like, what's that discussion today at Google? Like, how do you prioritize frontier versus, like, we have to do this? How do we actually need to deploy it if we build it?Jeff Dean [00:02:03]: Yeah, I mean, I think we always want to have models that are at the frontier or pushing the frontier because I think that's where you see what capabilities now exist that didn't exist at the sort of slightly less capable last year's version or last six months ago version. At the same time, you know, we know those are going to be really useful for a bunch of use cases, but they're going to be a bit slower and a bit more expensive than people might like for a bunch of other broader use cases. So I think what we want to do is always have kind of a highly capable sort of affordable model that enables a whole bunch of, you know, lower latency use cases. People can use them for agentic coding much more readily and then have the high-end, you know, frontier model that is really useful for, you know, deep reasoning, you know, solving really complicated math problems, those kinds of things. And it's not that one or the other is useful. They're both useful. So I think we'd like to do both.
And also, you know, through distillation, which is a key technique for making the smaller models more capable, you know, you have to have the frontier model in order to then distill it into your smaller model. So it's not like an either-or choice. You sort of need that in order to actually get a highly capable, more modest size model. Yeah.Alessio Fanelli [00:03:24]: I mean, you and Geoffrey came up with distillation in 2014.Jeff Dean [00:03:28]: Don't forget Oriol Vinyals as well. Yeah, yeah.Alessio Fanelli [00:03:30]: A long time ago. But like, I'm curious how you think about the cycle of these ideas, even like, you know, sparse models and, you know, how do you reevaluate them? How do you think about in the next generation of model, what is worth revisiting? Like, yeah, you worked on so many ideas that end up being influential, but like in the moment, they might not feel that way necessarily. Yeah.Jeff Dean [00:03:52]: I mean, I think distillation was originally motivated because we were seeing that we had a very large image data set at the time, you know, 300 million images that we could train on. And we were seeing that if you create specialists for different subsets of those image categories, you know, this one's going to be really good at sort of mammals, and this one's going to be really good at sort of indoor room scenes or whatever, and you can cluster those categories and train on an enriched stream of data after you do pre-training on a much broader set of images. You get much better performance. You can then treat that whole set of maybe 50 models you've trained as a large ensemble, but that's not a very practical thing to serve, right? So distillation really came about from the idea of, okay, what if we want to actually serve that and train all these independent sort of expert models and then squish it into something that actually fits in a form factor that you can actually serve? And that's, you know, not that different from what we're doing today. You know, often today, instead of having an ensemble of 50 models, we're having a much larger scale model that we then distill into a much smaller scale model.Shawn Wang [00:05:09]: Yeah. A part of me also wonders if distillation also has a story with the RL revolution. So let me maybe try to articulate what I mean by that, which is you can, RL basically spikes models in a certain part of the distribution. And then you have to sort of, well, you can spike models, but usually sometimes... It might be lossy in other areas and it's kind of like an uneven technique, but you can probably distill it back and you can, I think that the sort of general dream is to be able to advance capabilities without regressing on anything else. And I think like that, that whole capability merging without loss, I feel like it's like, you know, some part of that should be a distillation process, but I can't quite articulate it. I haven't seen many papers about it.Jeff Dean [00:06:01]: Yeah, I mean, I tend to think of one of the key advantages of distillation is that you can have a much smaller model and you can have a very large, you know, training data set and you can get utility out of making many passes over that data set because you're now getting the logits from the much larger model in order to sort of coax the right behavior out of the smaller model that you wouldn't otherwise get with just the hard labels. And so, you know, I think that's what we've observed.
Is you can get, you know, very close to your largest model performance with distillation approaches. And that seems to be, you know, a nice sweet spot for a lot of people because it enables us to kind of, for multiple Gemini generations now, we've been able to make the sort of flash version of the next generation as good or even substantially better than the previous generations pro. And I think we're going to keep trying to do that because that seems like a good trend to follow.Shawn Wang [00:07:02]: So, Dara asked, so it was the original map was Flash Pro and Ultra. Are you just sitting on Ultra and distilling from that? Is that like the mother load?Jeff Dean [00:07:12]: I mean, we have a lot of different kinds of models. Some are internal ones that are not necessarily meant to be released or served. Some are, you know, our pro scale model and we can distill from that as well into our Flash scale model. So I think, you know, it's an important set of capabilities to have and also inference time scaling. It can also be a useful thing to improve the capabilities of the model.Shawn Wang [00:07:35]: And yeah, yeah, cool. Yeah. And obviously, I think the economy of Flash is what led to the total dominance. I think the latest number is like 50 trillion tokens. I don't know. I mean, obviously, it's changing every day.Jeff Dean [00:07:46]: Yeah, yeah. But, you know, by market share, hopefully up.Shawn Wang [00:07:50]: No, I mean, there's no I mean, there's just the economics wise, like because Flash is so economical, like you can use it for everything. Like it's in Gmail now. It's in YouTube. Like it's yeah. It's in everything.Jeff Dean [00:08:02]: We're using it more in our search products of various AI mode reviews.Shawn Wang [00:08:05]: Oh, my God. Flash past the AI mode. Oh, my God. Yeah, that's yeah, I didn't even think about that.Jeff Dean [00:08:10]: I mean, I think one of the things that is quite nice about the Flash model is not only is it more affordable, it's also a lower latency. And I think latency is actually a pretty important characteristic for these models because we're going to want models to do much more complicated things that are going to involve, you know, generating many more tokens from when you ask the model to do so. So, you know, if you're going to ask the model to do something until it actually finishes what you ask it to do, because you're going to ask now, not just write me a for loop, but like write me a whole software package to do X or Y or Z. And so having low latency systems that can do that seems really important. And Flash is one direction, one way of doing that. You know, obviously our hardware platforms enable a bunch of interesting aspects of our, you know, serving stack as well, like TPUs, the interconnect between. Chips on the TPUs is actually quite, quite high performance and quite amenable to, for example, long context kind of attention operations, you know, having sparse models with lots of experts. These kinds of things really, really matter a lot in terms of how do you make them servable at scale.Alessio Fanelli [00:09:19]: Yeah. Does it feel like there's some breaking point for like the proto Flash distillation, kind of like one generation delayed? I almost think about almost like the capability as a. In certain tasks, like the pro model today is a saturated, some sort of task. So next generation, that same task will be saturated at the Flash price point. 
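As an aside, here is a minimal sketch of the logits-as-soft-supervision idea Jeff describes above: the small model is trained against the big model's softened logits in addition to the hard labels. The temperature, loss weighting, and toy linear "teacher"/"student" models are illustrative assumptions, not anything from Gemini's actual training setup.

```python
# Minimal logit-distillation sketch (assumed hyperparameters, toy models).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: teacher logits softened by temperature T carry far more
    # signal per example than the one-hot label alone.
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        soft_targets,
        reduction="batchmean",
    ) * (T * T)  # temperature-squared scaling from the original distillation paper
    # Hard targets: ordinary cross-entropy on the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss

# Usage: a frozen large "teacher" and a small "student" score the same batch.
teacher = torch.nn.Linear(128, 1000).eval()   # stand-in for the large model
student = torch.nn.Linear(128, 1000)          # stand-in for the Flash-sized model
x = torch.randn(32, 128)
labels = torch.randint(0, 1000, (32,))
with torch.no_grad():
    teacher_logits = teacher(x)
loss = distillation_loss(student(x), teacher_logits, labels)
loss.backward()
```

The point of the sketch is simply that the student can make many passes over the same data and still keep learning, because the teacher's full output distribution is a much richer target than the hard label.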
And I think for most of the things that people use models for at some point, the Flash model in two generation will be able to do basically everything. And how do you make it economical to like keep pushing the pro frontier when a lot of the population will be okay with the Flash model? I'm curious how you think about that.Jeff Dean [00:09:59]: I mean, I think that's true. If your distribution of what people are asking people, the models to do is stationary, right? But I think what often happens is as the models become more capable, people ask them to do more, right? So, I mean, I think this happens in my own usage. Like I used to try our models a year ago for some sort of coding task, and it was okay at some simpler things, but wouldn't do work very well for more complicated things. And since then, we've improved dramatically on the more complicated coding tasks. And now I'll ask it to do much more complicated things. And I think that's true, not just of coding, but of, you know, now, you know, can you analyze all the, you know, renewable energy deployments in the world and give me a report on solar panel deployment or whatever. That's a very complicated, you know, more complicated task than people would have asked a year ago. And so you are going to want more capable models to push the frontier in the absence of what people ask the models to do. And that also then gives us. Insight into, okay, where does the, where do things break down? How can we improve the model in these, these particular areas, uh, in order to sort of, um, make the next generation even better.Alessio Fanelli [00:11:11]: Yeah. Are there any benchmarks or like test sets they use internally? Because it's almost like the same benchmarks get reported every time. And it's like, all right, it's like 99 instead of 97. Like, how do you have to keep pushing the team internally to it? Or like, this is what we're building towards. Yeah.Jeff Dean [00:11:26]: I mean, I think. Benchmarks, particularly external ones that are publicly available. Have their utility, but they often kind of have a lifespan of utility where they're introduced and maybe they're quite hard for current models. You know, I, I like to think of the best kinds of benchmarks are ones where the initial scores are like 10 to 20 or 30%, maybe, but not higher. And then you can sort of work on improving that capability for, uh, whatever it is, the benchmark is trying to assess and get it up to like 80, 90%, whatever. I, I think once it hits kind of 95% or something, you get very diminishing returns from really focusing on that benchmark, cuz it's sort of, it's either the case that you've now achieved that capability, or there's also the issue of leakage in public data or very related kind of data being, being in your training data. Um, so we have a bunch of held out internal benchmarks that we really look at where we know that wasn't represented in the training data at all. There are capabilities that we want the model to have. Um, yeah. Yeah. Um, that it doesn't have now, and then we can work on, you know, assessing, you know, how do we make the model better at these kinds of things? Is it, we need different kind of data to train on that's more specialized for this particular kind of task. 
Do we need, um, you know, a bunch of, uh, you know, architectural improvements or some sort of, uh, model capability improvements, you know, what would help make that better?Shawn Wang [00:12:53]: Is there, is there such an example that you, uh, a benchmark that inspired an architectural improvement? Like, uh, I'm just kind of jumping on that because you just.Jeff Dean [00:13:02]: Uh, I mean, I think some of the long context capability of the, of the Gemini models that came, I guess, first in 1.5 really were about looking at, okay, we want to have, um, you know,Shawn Wang [00:13:15]: Immediately everyone jumped to like completely green charts of like, everyone had, I was like, how did everyone crack this at the same time? Right. Yeah. Yeah.Jeff Dean [00:13:23]: I mean, I think, um, and once you're set, I mean, as you say, that single needle-in-a-haystack benchmark is really saturated for at least context lengths up to 128K or something. Most models don't actually have, you know, much larger than 128K or 256K these days. We're trying to push the frontier of 1 million or 2 million context, which is good because I think there are a lot of use cases where, yeah, you know, putting a thousand pages of text or putting, you know, multiple hour-long videos in the context and then actually being able to make use of that is useful. The kinds of use cases you can explore there are fairly large. But the single needle in a haystack benchmark is sort of saturated. So you really want more complicated, sort of multi-needle or more realistic, take all this content and produce this kind of answer from a long context that sort of better assesses what it is people really want to do with long context. Which is not just, you know, can you tell me the product number for this particular thing?Shawn Wang [00:14:31]: Yeah, it's retrieval. It's retrieval within machine learning. It's interesting because I think the more meta level I'm trying to operate at here is you have a benchmark. You're like, okay, I see the architectural thing I need to do in order to go fix that. But should you do it? Because sometimes that's an inductive bias, basically. It's what Jason Wei, who used to work at Google, would say. Exactly the kind of thing. Yeah, you're going to win. Short term. Longer term, I don't know if that's going to scale. You might have to undo that.Jeff Dean [00:15:01]: I mean, I like to sort of not focus on exactly what solution we're going to derive, but what capability would you want? And I think we're very convinced that, you know, long context is useful, but it's way too short today. Right? Like, I think what you would really want is, can I attend to the internet while I answer my question? Right? But that's not going to happen, I think, by purely scaling the existing solutions, which are quadratic. So a million tokens kind of pushes what you can do. You're not going to do that for a billion tokens, let alone a trillion. But I think if you could give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses for that. You could attend to the internet. You could attend to the pixels of YouTube and the sort of deeper representations that we can form, not just for a single video, but across many videos. You know, on a personal Gemini level, you could attend to all of your personal state with your permission.
So like your emails, your photos, your docs, your plane tickets you have. I think that would be really, really useful. And the question is, how do you get algorithmic improvements and system level improvements that get you to something where you actually can attend to trillions of tokens? Right. In a meaningful way. Yeah.Shawn Wang [00:16:26]: But by the way, I think I did some math and it's like, if you spoke all day, every day for eight hours a day, you only generate a maximum of like a hundred K tokens, which like very comfortably fits.Jeff Dean [00:16:38]: Right. But if you then say, okay, I want to be able to understand everything people are putting on videos.Shawn Wang [00:16:46]: Well, also, I think that the classic example is you start going beyond language into like proteins and whatever else is extremely information dense. Yeah. Yeah.Jeff Dean [00:16:55]: I mean, I think one of the things about Gemini's multimodal aspects is we've always wanted it to be multimodal from the start. And so, you know, that sometimes to people means text and images and video sort of human-like and audio, audio, human-like modalities. But I think it's also really useful to have Gemini know about non-human modalities. Yeah. Like LIDAR sensor data from. Yes. Say, Waymo vehicles or. Like robots or, you know, various kinds of health modalities, x-rays and MRIs and imaging and genomics information. And I think there's probably hundreds of modalities of data where you'd like the model to be able to at least be exposed to the fact that this is an interesting modality and has certain meaning in the world. Where even if you haven't trained on all the LIDAR data or MRI data, you could have, because maybe that's not, you know, it doesn't make sense in terms of trade-offs of. You know, what you include in your main pre-training data mix, at least including a little bit of it is actually quite useful. Yeah. Because it sort of tempts the model that this is a thing.Shawn Wang [00:18:04]: Yeah. Do you believe, I mean, since we're on this topic and something I just get to ask you all the questions I always wanted to ask, which is fantastic. Like, are there some king modalities, like modalities that supersede all the other modalities? So a simple example was Vision can, on a pixel level, encode text. And DeepSeq had this DeepSeq CR paper that did that. Vision. And Vision has also been shown to maybe incorporate audio because you can do audio spectrograms and that's, that's also like a Vision capable thing. Like, so, so maybe Vision is just the king modality and like. Yeah.Jeff Dean [00:18:36]: I mean, Vision and Motion are quite important things, right? Motion. Well, like video as opposed to static images, because I mean, there's a reason evolution has evolved eyes like 23 independent ways, because it's such a useful capability for sensing the world around you, which is really what we want these models to be. So I think the only thing that we can be able to do is interpret the things we're seeing or the things we're paying attention to and then help us in using that information to do things. Yeah.Shawn Wang [00:19:05]: I think motion, you know, I still want to shout out, I think Gemini, still the only native video understanding model that's out there. So I use it for YouTube all the time. Nice.Jeff Dean [00:19:15]: Yeah. Yeah. I mean, it's actually, I think people kind of are not necessarily aware of what the Gemini models can actually do. Yeah. Like I have an example I've used in one of my talks. 
It had like, it was like a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has like Michael Jordan hitting some jump shot at the end of the finals and, you know, some soccer goals and things like that. And you can literally just give it the video and say, can you please make me a table of what all these different events are? What when the date is when they happened? And a short description. And so you get like now an 18 row table of that information extracted from the video, which is, you know, not something most people think of as like a turn video into sequel like table.Alessio Fanelli [00:20:11]: Has there been any discussion inside of Google of like, you mentioned tending to the whole internet, right? Google, it's almost built because a human cannot tend to the whole internet and you need some sort of ranking to find what you need. Yep. That ranking is like much different for an LLM because you can expect a person to look at maybe the first five, six links in a Google search versus for an LLM. Should you expect to have 20 links that are highly relevant? Like how do you internally figure out, you know, how do we build the AI mode that is like maybe like much broader search and span versus like the more human one? Yeah.Jeff Dean [00:20:47]: I mean, I think even pre-language model based work, you know, our ranking systems would be built to start. I mean, I think even pre-language model based work, you know, our ranking systems would be built to start. With a giant number of web pages in our index, many of them are not relevant. So you identify a subset of them that are relevant with very lightweight kinds of methods. You know, you're down to like 30,000 documents or something. And then you gradually refine that to apply more and more sophisticated algorithms and more and more sophisticated sort of signals of various kinds in order to get down to ultimately what you show, which is, you know, the final 10 results or, you know, 10 results plus. Other kinds of information. And I think an LLM based system is not going to be that dissimilar, right? You're going to attend to trillions of tokens, but you're going to want to identify, you know, what are the 30,000 ish documents that are with the, you know, maybe 30 million interesting tokens. And then how do you go from that into what are the 117 documents I really should be paying attention to in order to carry out the tasks that the user has asked? And I think, you know, you can imagine systems where you have, you know, a lot of highly parallel processing to identify those initial 30,000 candidates, maybe with very lightweight kinds of models. Then you have some system that sort of helps you narrow down from 30,000 to the 117 with maybe a little bit more sophisticated model or set of models. And then maybe the final model is the thing that looks. So the 117 things that might be your most capable model. So I think it has to, it's going to be some system like that, that is really enables you to give the illusion of attending to trillions of tokens. Sort of the way Google search gives you, you know, not the illusion, but you are searching the internet, but you're finding, you know, a very small subset of things that are, that are relevant.Shawn Wang [00:22:47]: Yeah. I often tell a lot of people that are not steeped in like Google search history that, well, you know, like Bert was. Like he was like basically immediately inside of Google search and that improves results a lot, right? 
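A toy sketch of the narrowing funnel Jeff describes here: cheap scorers run over everything, progressively heavier ones run over the survivors, and only a final handful of documents ever reaches the most capable model. The scoring functions, thresholds, and corpus are made-up placeholders; the stage sizes echo the numbers mentioned in the conversation.

```python
# Toy retrieval cascade: cheap scoring over the whole corpus, heavier scoring
# over the survivors, and only ~117 documents handed to the frontier model.
from typing import Callable, List

def cascade(corpus: List[str],
            cheap_score: Callable[[str], float],
            medium_score: Callable[[str], float],
            k_stage1: int = 30_000,
            k_stage2: int = 117) -> List[str]:
    # Stage 1: very lightweight scoring (think keyword or embedding lookup).
    stage1 = sorted(corpus, key=cheap_score, reverse=True)[:k_stage1]
    # Stage 2: a somewhat heavier model re-scores the candidates.
    stage2 = sorted(stage1, key=medium_score, reverse=True)[:k_stage2]
    # Stage 3 (not shown): the most capable model reads only these documents,
    # giving the illusion of having attended to the whole corpus.
    return stage2

# Usage with stand-in scorers: raw query overlap as "cheap", length-penalized
# overlap as "medium".
query = {"solar", "deployment"}
docs = ["solar deployment report", "cafe reviews", "solar panel economics"] * 5
top = cascade(
    docs,
    cheap_score=lambda d: len(query & set(d.split())),
    medium_score=lambda d: len(query & set(d.split())) / (1 + len(d.split())),
    k_stage1=10,
    k_stage2=3,
)
print(top)
```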
Like I don't, I don't have any numbers off the top of my head, but like, I'm sure you guys, that's obviously the most important numbers to Google. Yeah.Jeff Dean [00:23:08]: I mean, I think going to an LLM based representation of text and words and so on enables you to get out of the explicit hard notion of, of particular words having to be on the page, but really getting at the notion of this topic of this page or this page. Paragraph is highly relevant to this query. Yeah.Shawn Wang [00:23:28]: I don't think people understand how much LLMs have taken over all these very high traffic system, very high traffic. Yeah. Like it's Google, it's YouTube. YouTube has this like semantics ID thing where it's just like every token or every item in the vocab is a YouTube video or something that predicts the video using a code book, which is absurd to me for YouTube size.Jeff Dean [00:23:50]: And then most recently GROK also for, for XAI, which is like, yeah. I mean, I'll call out even before LLMs were used extensively in search, we put a lot of emphasis on softening the notion of what the user actually entered into the query.Shawn Wang [00:24:06]: So do you have like a history of like, what's the progression? Oh yeah.Jeff Dean [00:24:09]: I mean, I actually gave a talk in, uh, I guess, uh, web search and data mining conference in 2009, uh, where we never actually published any papers about the origins of Google search, uh, sort of, but we went through sort of four or five or six. generations, four or five or six generations of, uh, redesigning of the search and retrieval system, uh, from about 1999 through 2004 or five. And that talk is really about that evolution. And one of the things that really happened in 2001 was we were sort of working to scale the system in multiple dimensions. So one is we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality in general. Uh, because if you don't have the page in your index, you're going to not do well. Um, and then we also needed to scale our capacity because we were, our traffic was growing quite extensively. Um, and so we had, you know, a sharded system where you have more and more shards as the index grows, you have like 30 shards. And then if you want to double the index size, you make 60 shards so that you can bound the latency by which you respond for any particular user query. Um, and then as traffic grows, you add, you add more and more replicas of each of those. And so we eventually did the math that realized that in a data center where we had say 60 shards and, um, you know, 20 copies of each shard, we now had 1200 machines, uh, with disks. And we did the math and we're like, Hey, one copy of that index would actually fit in memory across 1200 machines. So in 2001, we introduced, uh, we put our entire index in memory and what that enabled from a quality perspective was amazing. Um, and so we had more and more replicas of each of those. Before you had to be really careful about, you know, how many different terms you looked at for a query, because every one of them would involve a disk seek on every one of the 60 shards. And so you, as you make your index bigger, that becomes even more inefficient. But once you have the whole index in memory, it's totally fine to have 50 terms you throw into the query from the user's original three or four word query, because now you can add synonyms like restaurant and restaurants and cafe and, uh, you know, things like that. Uh, bistro and all these things. 
And you can suddenly start, uh, sort of really, uh, getting at the meaning of the word as opposed to the exact semantic form the user typed in. And that was, you know, 2001, very much pre LLM, but really it was about softening the, the strict definition of what the user typed in order to get at the meaning.Alessio Fanelli [00:26:47]: What are like principles that you use to like design the systems, especially when you have, I mean, in 2001, the internet is like. Doubling, tripling every year in size is not like, uh, you know, and I think today you kind of see that with LLMs too, where like every year the jumps in size and like capabilities are just so big. Are there just any, you know, principles that you use to like, think about this? Yeah.Jeff Dean [00:27:08]: I mean, I think, uh, you know, first, whenever you're designing a system, you want to understand what are the sort of design parameters that are going to be most important in designing that, you know? So, you know, how many queries per second do you need to handle? How big is the internet? How big is the index you need to handle? How much data do you need to keep for every document in the index? How are you going to look at it when you retrieve things? Um, what happens if traffic were to double or triple, you know, will that system work well? And I think a good design principle is you're going to want to design a system so that the most important characteristics could scale by like factors of five or 10, but probably not beyond that because often what happens is if you design a system for X. And something suddenly becomes a hundred X, that would enable a very different point in the design space that would not make sense at X. But all of a sudden at a hundred X makes total sense. So like going from a disk space index to a in memory index makes a lot of sense once you have enough traffic, because now you have enough replicas of the sort of state on disk that those machines now actually can hold, uh, you know, a full copy of the, uh, index and memory. Yeah. And that all of a sudden enabled. A completely different design that wouldn't have been practical before. Yeah. Um, so I'm, I'm a big fan of thinking through designs in your head, just kind of playing with the design space a little before you actually do a lot of writing of code. But, you know, as you said, in the early days of Google, we were growing the index, uh, quite extensively. We were growing the update rate of the index. So the update rate actually is the parameter that changed the most. Surprising. So it used to be once a month.Shawn Wang [00:28:55]: Yeah.Jeff Dean [00:28:56]: And then we went to a system that could update any particular page in like sub one minute. Okay.Shawn Wang [00:29:02]: Yeah. Because this is a competitive advantage, right?Jeff Dean [00:29:04]: Because all of a sudden news related queries, you know, if you're, if you've got last month's news index, it's not actually that useful for.Shawn Wang [00:29:11]: News is a special beast. Was there any, like you could have split it onto a separate system.Jeff Dean [00:29:15]: Well, we did. We launched a Google news product, but you also want news related queries that people type into the main index to also be sort of updated.Shawn Wang [00:29:23]: So, yeah, it's interesting. And then you have to like classify whether the page is, you have to decide which pages should be updated and what frequency. 
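In the spirit of the 2001 story above, here is the kind of back-of-the-envelope arithmetic being described. The shard and replica counts come from the anecdote itself; the index size and per-machine RAM are invented placeholders just to show the shape of the calculation.

```python
# Back-of-the-envelope: once a sharded-on-disk index has enough replicas for
# traffic, does one full copy of the index fit in the fleet's combined RAM?
shards = 60               # index split into 60 pieces to bound per-query latency
replicas_per_shard = 20   # copies of each shard needed to absorb traffic
machines = shards * replicas_per_shard          # 1,200 machines with disks

index_size_gb = 2_000     # hypothetical total index size
ram_per_machine_gb = 2    # hypothetical early-2000s machine

fleet_ram_gb = machines * ram_per_machine_gb
copies_in_ram = fleet_ram_gb / index_size_gb

print(f"{machines} machines, {fleet_ram_gb} GB total RAM")
print(f"full in-memory copies of the index: {copies_in_ram:.1f}")
# With these (made-up) numbers, more than one full copy fits in RAM, so the
# per-term disk seeks disappear and you can afford to expand a three-word
# query into 50 softened terms.
```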
Oh yeah.Jeff Dean [00:29:30]: There's a whole like, uh, system behind the scenes that's trying to decide update rates and importance of the pages. So even if the update rate seems low, you might still want to recrawl important pages quite often because, uh, the likelihood they change might be low, but the value of having them updated is high.Shawn Wang [00:29:50]: Yeah, yeah, yeah, yeah. Uh, well, you know, yeah. This, uh, you know, mention of latency and, and saving things to disk reminds me of one of your classics, which I have to bring up, which is latency numbers every programmer should know. Uh, was there a, was it just a, just a general story behind that? Did you like just write it down?Jeff Dean [00:30:06]: I mean, this has like sort of eight or 10 different kinds of metrics that are like, how long does a cache miss take? How long does a branch mispredict take? How long does a reference to main memory take? How long does it take to send, you know, a packet from the US to the Netherlands or something? Um,Shawn Wang [00:30:21]: Why Netherlands, by the way, or is it, is that because of Chrome?Jeff Dean [00:30:25]: Uh, we had a data center in the Netherlands, um, so, I mean, I think this gets to the point of being able to do the back of the envelope calculations. So these are sort of the raw ingredients of those, and you can use them to say, okay, well, if I need to design a system to do image search and thumbnailing or something of the result page, you know, how would I do that? I could pre-compute the image thumbnails. I could, like, try to thumbnail them on the fly from the larger images. What would that do? How much disk bandwidth would I need? How many disk seeks would I do? Um, and you can sort of actually do thought experiments in, you know, 30 seconds or a minute with the sort of, uh, basic, uh, basic numbers at your fingertips. Uh, and then as you sort of build software using higher level libraries, you kind of want to develop the same intuitions for how long does it take to, you know, look up something in this particular kind of.Shawn Wang [00:31:51]: Which is a simple byte conversion. That's nothing interesting. I wonder if you have any, if you were to update your...Jeff Dean [00:31:58]: I mean, I think it's really good to think about calculations you're doing in a model, either for training or inference.Jeff Dean [00:32:09]: Often a good way to view that is how much state will you need to bring in from memory, either like on-chip SRAM or HBM, the accelerator-attached memory, or DRAM, or over the network. And then how expensive is that data motion relative to the cost of, say, an actual multiply in the matrix multiply unit? And that cost is actually really, really low, right? Because it's order, depending on your precision, I think it's like sub one picojoule.Shawn Wang [00:32:50]: Oh, okay. You measure it by energy. Yeah. Yeah.Jeff Dean [00:32:52]: Yeah. I mean, it's all going to be about energy and how do you make the most energy efficient system. And then moving data from the SRAM on the other side of the chip, not even off the chip, but on the other side of the same chip, can be, you know, a thousand picojoules. Oh, yeah. And so all of a sudden, this is why your accelerators require batching. Because if you move, like, say, a parameter of a model from SRAM on the, on the chip into the multiplier unit, that's going to cost you a thousand picojoules. So you better make use of that, that thing that you moved, many, many times.
So that's where the batch dimension comes in. Because all of a sudden, you know, if you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's really not good.Shawn Wang [00:33:40]: Yeah. Yeah. Right.Jeff Dean [00:33:41]: Because then you paid a thousand picojoules in order to do your one picojoule multiply.Shawn Wang [00:33:46]: I have never heard an energy-based analysis of batching.Jeff Dean [00:33:50]: Yeah. I mean, that's why people batch. Yeah. Ideally, you'd like to use batch size one because the latency would be great.Shawn Wang [00:33:56]: The best latency.Jeff Dean [00:33:56]: But the energy cost and the compute cost inefficiency that you get is quite large. So, yeah.Shawn Wang [00:34:04]: Is there a similar trick like, like, like you did with, you know, putting everything in memory? Like, you know, I think obviously Groq has caused a lot of waves with betting very hard on SRAM. I wonder if, like, that's something that you already saw with, with the TPUs, right? Like that, that you had to, uh, to serve at your scale, uh, you probably sort of saw that coming. Like what, what, what hardware, uh, innovations or insights were formed because of what you're seeing there?Jeff Dean [00:34:33]: Yeah. I mean, I think, you know, TPUs have this nice, uh, sort of regular structure of 2D or 3D meshes with a bunch of chips connected. Yeah. And each one of those has HBM attached. Um, I think for serving some kinds of models, uh, you know, you, you pay a lot higher cost, uh, and time latency, um, bringing things in from HBM than you do bringing them in from, uh, SRAM on the chip. So if you have a small enough model, you can actually do model parallelism, spread it out over lots of chips and you actually get quite good throughput improvements and latency improvements from doing that. And so you're now sort of striping your smallish scale model over say 16 or 64 chips. Uh, but if you do that and it all fits in SRAM, uh, that can be a big win. So yeah, that's not a surprise, but it is a good technique.Alessio Fanelli [00:35:27]: Yeah. What about the TPU design? Like how much do you decide where the improvements have to go? So like, this is like a good example of like, is there a way to bring the thousand picojoules down to 50? Like, is it worth designing a new chip to do that? The extreme is like when people say, oh, you should burn the model on the ASIC and that's kind of like the most extreme thing. How much of it is it worth doing in hardware when things change so quickly? Like what was the internal discussion? Yeah.Jeff Dean [00:35:57]: I mean, we, we have a lot of interaction between say the TPU chip design architecture team and the sort of higher level modeling, uh, experts, because you really want to take advantage of being able to co-design what should future TPUs look like based on where we think the sort of ML research puck is going, uh, in some sense, because, uh, you know, as a hardware designer for ML in particular, you're trying to design a chip starting today and that design might take two years before it even lands in a data center. And then it has to sort of be a reasonable lifetime of the chip to take you three, four or five years. So you're trying to predict what ML computations people will want to run two to six years out in a very fast changing field. And so having people with
Interesting ML research ideas of things we think will start to work in that timeframe or will be more important in that timeframe, uh, really enables us to then get, you know, interesting hardware features put into, you know, TPU N plus two, where TPU N is what we have today.Shawn Wang [00:37:10]: Oh, the cycle time is plus two.Jeff Dean [00:37:12]: Roughly. Wow. Because, uh, I mean, sometimes you can squeeze some changes into N plus one, but, you know, bigger changes are going to require the chip. Yeah. Design be earlier in its lifetime design process. Um, so whenever we can do that, it's generally good. And sometimes you can put in speculative features that maybe won't cost you much chip area, but if it works out, it would make something, you know, 10 times as fast. And if it doesn't work out, well, you burned a little bit of tiny amount of your chip area on that thing, but it's not that big a deal. Uh, sometimes it's a very big change and we want to be pretty sure this is going to work out. So we'll do like lots of carefulness. Uh, ML experimentation to show us, uh, this is actually the, the way we want to go. Yeah.Alessio Fanelli [00:37:58]: Is there a reverse of like, we already committed to this chip design so we can not take the model architecture that way because it doesn't quite fit?Jeff Dean [00:38:06]: Yeah. I mean, you, you definitely have things where you're going to adapt what the model architecture looks like so that they're efficient on the chips that you're going to have for both training and inference of that, of that, uh, generation of model. So I think it kind of goes both ways. Um, you know, sometimes you can take advantage of, you know, lower precision things that are coming in a future generation. So you can, might train it at that lower precision, even if the current generation doesn't quite do that. Mm.Shawn Wang [00:38:40]: Yeah. How low can we go in precision?Jeff Dean [00:38:43]: Because people are saying like ternary is like, uh, yeah, I mean, I'm a big fan of very low precision because I think that gets, that saves you a tremendous amount of time. Right. Because it's picojoules per bit that you're transferring and reducing the number of bits is a really good way to, to reduce that. Um, you know, I think people have gotten a lot of luck, uh, mileage out of having very low bit precision things, but then having scaling factors that apply to a whole bunch of, uh, those, those weights. Scaling. How does it, how does it, okay.Shawn Wang [00:39:15]: Interesting. You, so low, low precision, but scaled up weights. Yeah. Huh. Yeah. Never considered that. Yeah. Interesting. Uh, w w while we're on this topic, you know, I think there's a lot of, um, uh, this, the concept of precision at all is weird when we're sampling, you know, uh, we just, at the end of this, we're going to have all these like chips that I'll do like very good math. And then we're just going to throw a random number generator at the start. So, I mean, there's a movement towards, uh, energy based, uh, models and processors. I'm just curious if you've, obviously you've thought about it, but like, what's your commentary?Jeff Dean [00:39:50]: Yeah. I mean, I think. There's a bunch of interesting trends though. 
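A rough back-of-the-envelope sketch of the energy framing from the last few exchanges, using the illustrative figures mentioned above (roughly 1 pJ per multiply, roughly 1000 pJ to move a parameter from far-away SRAM or HBM) plus the speculative-decoding amortization Jeff gets to just below. The parameter count, batch sizes, and accepted-draft counts are assumptions, not measurements.

```python
# Energy back-of-the-envelope: moving a weight costs ~1000x a multiply, so the
# move has to be amortized over a batch (and/or accepted speculative tokens).
PJ_PER_MULTIPLY = 1.0        # "sub one picojoule" per MAC, rounded up
PJ_PER_WEIGHT_MOVE = 1000.0  # moving a parameter from far SRAM/HBM to the MXU

def energy_per_token_pj(params: float, batch: int, accepted_drafts: int = 1) -> float:
    # Weight movement is paid once per step, then shared by every sequence in
    # the batch and every accepted speculative-decoding token.
    effective_batch = batch * accepted_drafts
    move = params * PJ_PER_WEIGHT_MOVE / effective_batch
    compute = params * PJ_PER_MULTIPLY      # one multiply per parameter per token
    return move + compute

params = 8e9  # hypothetical 8B-parameter model
for batch, drafts in [(1, 1), (256, 1), (256, 5)]:
    e = energy_per_token_pj(params, batch, drafts)
    print(f"batch={batch:4d} accepted_drafts={drafts}: {e / 1e12:.3f} J per token")
# batch=1 is dominated by the 1000 pJ moves; batch=256 (or 256 with ~5 accepted
# draft tokens) amortizes the moves until the multiplies start to dominate.
```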
Energy based models is one, you know, diffusion based models, which don't sort of sequentially decode tokens is another, um, you know, speculative decoding is a way that you can get sort of an equivalent, very small.Shawn Wang [00:40:06]: Draft.Jeff Dean [00:40:07]: Batch factor, uh, for like you predict eight tokens out and that enables you to sort of increase the effective batch size of what you're doing by a factor of eight, even, and then you maybe accept five or six of those tokens. So you get. A five, a five X improvement in the amortization of moving weights, uh, into the multipliers to do the prediction for the, the tokens. So these are all really good techniques and I think it's really good to look at them from the lens of, uh, energy, real energy, not energy based models, um, and, and also latency and throughput, right? If you look at things from that lens, that sort of guides you to. Two solutions that are gonna be, uh, you know, better from, uh, you know, being able to serve larger models or, you know, equivalent size models more cheaply and with lower latency.Shawn Wang [00:41:03]: Yeah. Well, I think, I think I, um, it's appealing intellectually, uh, haven't seen it like really hit the mainstream, but, um, I do think that, uh, there's some poetry in the sense that, uh, you know, we don't have to do, uh, a lot of shenanigans if like we fundamentally. Design it into the hardware. Yeah, yeah.Jeff Dean [00:41:23]: I mean, I think there's still a, there's also sort of the more exotic things like analog based, uh, uh, computing substrates as opposed to digital ones. Uh, I'm, you know, I think those are super interesting cause they can be potentially low power. Uh, but I think you often end up wanting to interface that with digital systems and you end up losing a lot of the power advantages in the digital to analog and analog to digital conversions. You end up doing, uh, at the sort of boundaries. And periphery of that system. Um, I still think there's a tremendous distance we can go from where we are today in terms of energy efficiency with sort of, uh, much better and specialized hardware for the models we care about.Shawn Wang [00:42:05]: Yeah.Alessio Fanelli [00:42:06]: Um, any other interesting research ideas that you've seen, or like maybe things that you cannot pursue a Google that you would be interested in seeing researchers take a step at, I guess you have a lot of researchers. Yeah, I guess you have enough, but our, our research.Jeff Dean [00:42:21]: Our research portfolio is pretty broad. I would say, um, I mean, I think, uh, in terms of research directions, there's a whole bunch of, uh, you know, open problems and how do you make these models reliable and able to do much longer, kind of, uh, more complex tasks that have lots of subtasks. How do you orchestrate, you know, maybe one model that's using other models as tools in order to sort of build, uh, things that can accomplish, uh, you know, much more. Yeah. Significant pieces of work, uh, collectively, then you would ask a single model to do. Um, so that's super interesting. How do you get more verifiable, uh, you know, how do you get RL to work for non-verifiable domains? I think it's a pretty interesting open problem because I think that would broaden out the capabilities of the models, the improvements that you're seeing in both math and coding. Uh, if we could apply those to other less verifiable domains, because we've come up with RL techniques that actually enable us to do that. 
Uh, effectively, that would, that would really make the models improve quite a lot, I think.Alessio Fanelli [00:43:26]: I'm curious, like when we had Noam Brown on the podcast, he said, um, they already proved you can do it with deep research. Um, you kind of have it with AI mode in a way it's not verifiable. I'm curious if there's any thread that you think is interesting there. Like what is it? Both are like information retrieval of JSON. So I wonder if it's like the retrieval is like the verifiable part that you can score, or what are like, yeah, yeah. How, how would you model that, that problem?Jeff Dean [00:43:55]: Yeah. I mean, I think there are ways of having other models that can evaluate the results of what a first model did, maybe even retrieving. Can you have another model that says, are these things you retrieved relevant? Or can you rate these 2000 things you retrieved to assess which ones are the 50 most relevant or something? Um, I think those kinds of techniques are actually quite effective. Sometimes it can even be the same model, just prompted differently to be a, you know, a critic as opposed to a, uh, actual retrieval system. Yeah.Shawn Wang [00:44:28]: Um, I do think like there, there is that, that weird cliff where like, it feels like we've done the easy stuff and then now it's, but it always feels like that every year. It's like, oh, like we know, we know, and the next part is super hard and nobody's figured it out. And, uh, exactly with this RLVR thing where like everyone's talking about, well, okay, how do we do the next stage of the non-verifiable stuff. And everyone's like, I don't know, you know, LLM judge.Jeff Dean [00:44:56]: I mean, I feel like the nice thing about this field is there's lots and lots of smart people thinking about creative solutions to some of the problems that we all see. Uh, because I think everyone sort of sees that the models, you know, are great at some things and they fall down around the edges of those things and, and are not as capable as we'd like in those areas. And then coming up with good techniques and trying those and seeing which ones actually make a difference is sort of what the whole research aspect of this field is, is pushing forward. And I think that's why it's super interesting. You know, if you think about two years ago, we were struggling with GSM8K problems, right? Like, you know, Fred has two rabbits. He gets three more rabbits. How many rabbits does he have? That's a pretty far cry from the kinds of mathematics that the models can do now, where you're doing IMO and Erdos problems in pure language. Yeah. Yeah. Pure language. So that is a really, really amazing jump in capabilities in, you know, in a year and a half or something. And I think, um, for other areas, it'd be great if we could make that kind of leap. Uh, and you know, we don't exactly see how to do it for some, some areas, but we do see it for some other areas and we're going to work hard on making that better. Yeah.Shawn Wang [00:46:13]: Yeah.Alessio Fanelli [00:46:14]: Like YouTube thumbnail generation. That would be very helpful. We need that. That would be AGI. We need that.Shawn Wang [00:46:20]: That would be. As far as content creators go.Jeff Dean [00:46:22]: I guess I'm not a YouTube creator, so I don't care that much about that problem, but I guess, uh, many people do.Shawn Wang [00:46:27]: It does. Yeah. It doesn't, it doesn't matter. People do judge books by their covers as it turns out.
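A minimal sketch of the "same model, prompted differently, as a critic" idea from the exchange above: a critic prompt turns the model into a scorer whose rating can serve as a reward or reranking signal in domains with no programmatic verifier. The `generate` callable, the prompt wording, and the 1-10 scale are hypothetical stand-ins, not a real API.

```python
# LLM-as-critic sketch: score candidate responses with the same (or another)
# model prompted as a reviewer, then use the score as a reward / rerank signal.
import re
from typing import Callable

CRITIC_PROMPT = """You are a strict reviewer. Rate the RESPONSE to the TASK
on a scale of 1-10 for accuracy, completeness, and clarity.
TASK: {task}
RESPONSE: {response}
Reply with just the number."""

def critic_score(generate: Callable[[str], str], task: str, response: str) -> float:
    raw = generate(CRITIC_PROMPT.format(task=task, response=response))
    match = re.search(r"\d+(\.\d+)?", raw)
    score = float(match.group()) if match else 0.0
    return min(max(score, 0.0), 10.0) / 10.0   # normalize to [0, 1]

def best_of_n(generate: Callable[[str], str], task: str, n: int = 4) -> str:
    # Use the critic as a reranker: sample n candidates, keep the best-scored.
    candidates = [generate(task) for _ in range(n)]
    return max(candidates, key=lambda c: critic_score(generate, task, c))

# Usage with a dummy generator, just to show the plumbing:
dummy = lambda prompt: "7" if "strict reviewer" in prompt else "draft answer"
print(best_of_n(dummy, "Summarize the renewable energy report."))
```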
Shawn Wang: Just to draw a bit on the IMO gold: I'm still not over the fact that a year ago we had AlphaProof and AlphaGeometry and all those things, and then this year we were like, screw that, we'll just chuck it into Gemini. What's your reflection? This question about the merger of symbolic systems and LLMs was very much a core belief, and then somewhere along the line people just said, nope, we'll do it all in the LLM. Jeff Dean [00:47:02]: Yeah. It makes a lot of sense to me, because humans manipulate symbols, but we probably don't have a symbolic representation in our heads. We have some distributed, neural-net-like representation, lots of different neurons and activation patterns firing when we see certain things, and that enables us to reason and plan, do chains of thought, and roll them back: that approach for solving the problem doesn't seem like it's going to work, so I'm going to try this one. In a lot of ways we're emulating what we intuitively think is happening inside real brains with neural-net-based models. So it never made sense to me to have completely separate, discrete symbolic things and then a completely different way of thinking about those things. Shawn Wang [00:47:59]: Interesting. It maybe seems obvious to you, but it wasn't obvious to me a year ago. Jeff Dean [00:48:06]: I do think that progression, the IMO entry that translated to Lean and used Lean plus a specialized geometry model one year, and then the next year switching to a single unified model that is roughly the production model with a little more inference budget, is actually quite good, because it shows that the capabilities of the general model have improved dramatically and now you don't need the specialized model. This is very similar to the 2013-to-2016 era of machine learning: it used to be that people would train separate models for each different problem. I want to recognize street signs, so I train a street-sign recognition model; I want to do speech recognition, so I have a speech model. Now the era of unified models that do everything is really upon us, and the question is how well those models generalize to new things they've never been asked to do, and they're getting better and better. Shawn Wang [00:49:10]: And you don't need domain experts. I interviewed ETA, who was on that team, and he was like, yeah, I don't know how it works, I don't know where the IMO competition was held, I don't know the rules of it; I just trained the models. It's kind of interesting that people with this universal machine-learning skill set, you just give them data and enough compute and they can tackle basically any task, which is the bitter lesson, I guess. Jeff Dean [00:49:39]: I think general models will win out over specialized ones in most cases. Shawn Wang [00:49:45]: So I want to push there a bit. I think there's one hole here.
There's this concept of the capacity of a model: abstractly, a model can only contain the number of bits that it has. And who knows, maybe Gemini Pro is one to ten trillion parameters; we don't know. But the Gemma models, for example: a lot of people want open-source local models like that, and they have some knowledge which is not necessary, right? They can't know everything. You have the luxury of the big model, and the big model should be capable of everything, but when you're distilling down to the small models, you're actually memorizing things that are not useful. So do we want to extract that? Can we divorce knowledge from reasoning? Jeff Dean [00:50:38]: Yeah. I think you do want the model to be most effective at reasoning if it can retrieve things, because having the model devote precious parameter space to remembering obscure facts that could be looked up is not the best use of that parameter space; you might prefer something that is more generally useful in more settings than some obscure fact. So that's always a tension. At the same time, you also don't want your model to be completely detached from knowing stuff about the world. It's probably useful to know how long the Golden Gate Bridge is, just to have a general sense of how long bridges are, and it should have that kind of knowledge. It maybe doesn't need to know how long some teeny little bridge in a more obscure part of the world is, but it does help to have a fair bit of world knowledge, and the bigger your model is, the more you can have. But I do think combining retrieval with reasoning, and making the model really good at doing multiple stages of retrieval... Shawn Wang [00:51:49]: ...and reasoning through the intermediate retrieval results, is going to be a pretty effective way of making the model seem much more capable. Because if you think about, say, a personal Gemini... Jeff Dean [00:52:01]: Right, we're probably not going to train Gemini on my email. We'd rather have a single model that we can use, with the ability to retrieve from my email as a tool, have the model reason about it, retrieve from my photos or whatever, make use of that, and have multiple stages of interaction.
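The "retrieve, reason, retrieve again" pattern described above, with email search used as a tool rather than training data, can be sketched as a simple agent loop. The `SEARCH:`/`ANSWER:` protocol, the `llm_step` stub, and the toy mailbox are all assumptions made up for this sketch; they are not Gemini's actual tool interface.

```python
# Sketch of a multi-stage retrieval loop: the model alternates between
# asking for a tool call and reasoning over the returned results.
# All names and the SEARCH:/ANSWER: protocol are made up for this sketch.

MAILBOX = {
    "flight": "Your flight BA284 departs 18:45 on March 3.",
    "hotel": "Hotel booking: The Grand, March 3-6.",
}

def search_email(query: str) -> str:
    # Toy retrieval tool over a fake mailbox.
    hits = [v for k, v in MAILBOX.items() if k in query.lower()]
    return "\n".join(hits) or "(no results)"

def llm_step(transcript: str) -> str:
    # Stub for a model call. A real system would send `transcript`
    # to a model and get back either a SEARCH: or an ANSWER: line.
    if "TOOL(" not in transcript:
        return "SEARCH: flight"          # first, go look at the mailbox
    if "BA284" in transcript:
        return "ANSWER: Your flight BA284 leaves at 18:45 on March 3."
    return "ANSWER: I could not find it."

def answer(question: str, max_steps: int = 4) -> str:
    transcript = f"USER: {question}"
    for _ in range(max_steps):
        step = llm_step(transcript)
        if step.startswith("ANSWER:"):
            return step[len("ANSWER:"):].strip()
        if step.startswith("SEARCH:"):
            query = step[len("SEARCH:"):].strip()
            transcript += f"\nTOOL(search_email, {query}):\n{search_email(query)}"
    return "No answer within the step budget."

if __name__ == "__main__":
    print(answer("When does my flight leave?"))
```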
Alessio Fanelli [00:52:24]: Do you think vertical models are an interesting pursuit? When people say, we're building the best healthcare LLM, we're building the best law LLM, are those kind of short-term stopgaps, or...? Jeff Dean [00:52:37]: No, I think vertical models are interesting. You want them to start from a pretty good base model, but then you can view them as enriching the data distribution for that particular vertical domain, for healthcare, say, or for robotics. We're probably not going to train Gemini on all possible robotics data you could train it on, because we want it to have a balanced set of capabilities. So we'll expose it to some robotics data, but if you're trying to build a really, really good robotics model, you're going to want to start with that and then train it on more robotics data. Maybe that would hurt its multilingual translation capability but improve its robotics capabilities. We're always making those kinds of trade-offs in the data mix we train the base Gemini models on. We'd love to include data from 200 more languages, and as much data as we have for those languages, but that's going to displace some other capabilities of the model. It won't be as good at Perl programming; it'll still be good at Python programming, because we'll include enough of that, but other long-tail programming languages or coding capabilities may suffer, or multimodal reasoning capabilities may suffer because we didn't get to expose it to as much data there, even though it's really good at multilingual things. So I think some combination of specialized models, maybe more modular models, would be nice: the capability to have those 200 languages, plus this awesome robotics model, plus this awesome healthcare module, all knitted together to work in concert and called upon in different circumstances. If I have a health-related question, it should enable using the health module in conjunction with the main base model to be even better at those kinds of things. Shawn Wang [00:54:36]: Installable knowledge. Jeff Dean [00:54:37]: Right. Shawn Wang [00:54:38]: Just download it as a package. Jeff Dean [00:54:39]: And some of that installable stuff can come from retrieval, but some of it probably should come from preloaded training on a hundred billion tokens or a trillion tokens of health data. Shawn Wang [00:54:51]: For listeners, I'll highlight the Gemma 3n paper, where there was a little bit of that, I think. Alessio Fanelli [00:54:56]: Yeah. I guess the question is, how many billions of tokens do you need to outpace the frontier model improvements? If I have to make this model better at healthcare and the main Gemini model is still improving, do I need 50 billion tokens? Can I do it with a hundred billion? If I need a trillion healthcare tokens, they're probably not out there. Jeff Dean [00:55:21]: Well, healthcare is a particularly challenging domain. There's a lot of healthcare data that we appropriately don't have access to, but there are a lot of healthcare organizations that want to train models on their own data, which is not public healthcare data. So I think there are opportunities there to, say, partner with a large healthcare organization and train models for their use that are more bespoke, but probably better than a general model trained on, say, public data.
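Dean's data-mix trade-off is, at bottom, a budget constraint: sampling proportions sum to one, so giving weight to 200 more languages necessarily takes weight from everything else. A tiny sketch of that arithmetic, with invented domain names and numbers:

```python
# Toy data-mixture arithmetic: adding weight to new domains must come
# out of the existing domains, because sampling proportions sum to 1.
# The domain names and numbers are invented for illustration.

def renormalize(mix: dict[str, float], added: dict[str, float]) -> dict[str, float]:
    """Add new domains with fixed target weights; scale old domains down
    proportionally so the whole mixture still sums to 1."""
    new_total = sum(added.values())
    scale = 1.0 - new_total
    out = {k: v * scale for k, v in mix.items()}
    out.update(added)
    return out

base_mix = {"english_web": 0.45, "code_python": 0.20, "code_long_tail": 0.10,
            "multimodal": 0.15, "multilingual_top50": 0.10}

# Add 200 more languages at 8% of the mixture: every existing domain,
# including long-tail code, gets proportionally less.
new_mix = renormalize(base_mix, {"multilingual_next200": 0.08})

for name, w in sorted(new_mix.items(), key=lambda kv: -kv[1]):
    print(f"{name:22s} {w:.3f}")
print("total:", round(sum(new_mix.values()), 6))
```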
Shawn Wang [00:55:58]: Yeah. By the way, this is somewhat related to the language conversation: I think one of your favorite examples was that you can put a low-resource language in the context and it just learns. Jeff Dean [00:56:09]: Oh yeah, I think the example we used was Kalamang, which is truly low-resource because it's only spoken by, I think, 120 people in the world, and there's no written text. Shawn Wang [00:56:20]: So you can just do it that way, just put it in the context; you put the whole data set in the context, right? Jeff Dean [00:56:27]: If you take a language like Somali, or Ethiopian Amharic, there is a fair bit of text in the world in those languages. We're probably not putting all the data from those languages into the Gemini base training; we put some of it, but if you put more of it in, you'll improve the capabilities of those models. Shawn Wang [00:56:49]: Yeah.
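The in-context trick for a low-resource language amounts to packing whatever grammar notes and parallel examples exist into the prompt and asking for a translation. A minimal sketch, with made-up example data and a stubbed `generate` call standing in for a long-context model:

```python
# Sketch of long-context in-context learning for a low-resource language:
# put whatever grammar notes and parallel sentences exist into the prompt,
# then ask the model to translate. The data and `generate` stub are fake.

GRAMMAR_NOTES = "Word order is subject-object-verb. Plurals add the suffix -ra."
PARALLEL_EXAMPLES = [
    ("the dog sleeps", "kua mutep"),
    ("the dogs sleep", "kuara mutep"),
]

def build_prompt(source_sentence: str) -> str:
    lines = ["You are translating into a low-resource language.",
             "Grammar notes:", GRAMMAR_NOTES, "", "Examples:"]
    lines += [f"English: {en}\nTarget: {tgt}" for en, tgt in PARALLEL_EXAMPLES]
    lines += ["", f"English: {source_sentence}", "Target:"]
    return "\n".join(lines)

def generate(prompt: str) -> str:
    # Placeholder for a long-context model call; a real system would send
    # the full grammar book and dictionary, not two toy examples.
    return "<model output here>"

if __name__ == "__main__":
    prompt = build_prompt("the cat sleeps")
    print(prompt)
    print(generate(prompt))
```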
The quality of Swadhisthan on the right side is creativity, i.e. truly inspired thoughts, ideas and actions. The quality of Swadhisthan on the left side is pure knowledge, i.e. the truly discerning and discriminating power to see the innate nature of things at a new stage in our awareness called vibrational awareness.
What God Calls Pure Christianity (And Why It's So Uncomfortable) What if much of what we call Christianity… God wouldn't call pure? In this powerful episode of The Rob Skinner Podcast, Rob explores the simple yet challenging definition of real faith found in James 1:27. When church culture, traditions, and labels are stripped away, Scripture reveals a clear picture of what God considers pure and faultless religion: loving the vulnerable and living with a clean heart. Rob unpacks how early Christians transformed the world through radical compassion—serving the sick, the poor, and the forgotten—even at great personal cost. But James' message doesn't stop with outward action. True faith also requires guarding our inner life from the quiet spiritual pollution that can dull our devotion. This episode is a call to live with: radical compassion toward those in need; personal purity in a culture that pulls us away from God; and balanced faith that is both outwardly loving and inwardly clean. Through personal stories, ministry reflections, and practical challenges, Rob invites you to examine: who around you needs protection, encouragement, or help; what habits or influences may be quietly polluting your heart; and how to live the kind of faith that God calls pure and faultless. If you want a simple definition of a 10X Christian, this might be it: Love radically. Live purely. And that kind of faith doesn't just change you—it changes the world.
Singer, songwriter and producer of R&B and Funk. It's Curt Jones!
→ Stay Connected Instagram: https://www.instagram.com/lifechurchuk/ Facebook: https://www.facebook.com/lifechurchfolkestone Youtube: https://www.youtube.com/@lifechurchuk1 Instagram: https://www.instagram.com/robertmaasbach/ Facebook: https://www.facebook.com/robertmaasbach/ → Give It's the generosity of many that enables Life Church to fulfil all that God has called us to do: https://www.lifechurchuk.org/give/ → New to Life Church? If you're new we would love to get in touch and connect with you: https://lifechurchuk.org/new-to-life-church/
Is "entertainment-first" content killing your Pinterest growth? Fun videos and memes might get views, but do they drive saves, clicks, and sales for busy mom entrepreneurs? We'll discuss that in this video as well as what works best for Pinterest. Pinterest Marketing for Beginners. Pinterest strategy. FACEBOOK GROUPAUDIT PAGE
Czabe delivers a massive triple-header today! First, he weighs in on the pros/cons of Olympic legend Lindsey Vonn deciding to ski on a torn ACL, and then breaking her leg in a crash while trying. Also, we get a consult from Dr. BRIAN KOCH, an actual orthopedic surgeon, on the situation. Czabe shares a snippet of his weekly Monday chit-chat with Scott and Solly on *their* pod, then MATT MUELLER swings by to discuss all the good, bad, and the whaaaaa? of the Super Bowl ad slate. Advertising Inquiries: https://redcircle.com/brands Privacy & Opt-Out: https://redcircle.com/privacy
The new Babolat Pure Aero is here, and after going beyond the initial playtest, we have a lot more to say. In this episode, we dive deep into what makes the latest Pure Aero update such a positive step forward and why it's winning us over on court. If you've ever loved the Pure Aero but wanted more feel, confidence, or control without losing that signature spin, this conversation is for you. We break down how the updates actually translate to real match play—and who we think will benefit most from this generation. As tennis gear specialists who hit with racquets daily, we move past spec sheets and marketing claims to talk honestly about what stood out once the racquet settled in. In this episode, we cover: What changed in the new Babolat Pure Aero and why it matters How the racquet feels after extended hitting (not just day one) Spin potential, comfort, and confidence from the baseline Who this Pure Aero is best suited for compared to past versions
When it comes to love, the secular world has plenty to say. Turn on the television, radio, or explore the internet, and you'll find advice on every type of relationship, from friendships to getting along with your parents, and of course, advice on romantic relationships. People are clamoring to figure love out…but what does the Bible say about love?1 Corinthians 13:4–5 says, “Love is patient, love is kind. It does not envy, it does not boast, it is not proud. It does not dishonor others, it is not self-seeking, it is not easily angered, it keeps no record of wrongs.”Love isn't about control or about protecting our own interests. It's not about winning, because true love doesn't keep score. God's Word gives us a beautiful picture of what love can be when we seek to follow the Lord: patient, kind, selfless, humble, and forgiving. That's the kind of love we should strive to build with family, friends, and a potential mate.If we want to take Paul's wisdom in Corinthians to heart, we must first think about what the other person needs in our relationships. And you know, if you practice this model, you'll find that soon enough, your own needs will be met, too! Pure love has a ripple effect.Let it wash over you!Let's pray. Lord, your relationship to us is perfect. Help us love as you love. In Jesus' name, amen. Change your shirt, and you can change the world! Save 15% Off your entire purchase of faith-based apparel + gifts at Kerusso.com with code KDD15.
It's the Pure Report annual predictions episode! We welcome Shawn Rosemarin to dive deep into the world of tech in 2026, including a look back at 2025 predictions on AI becoming a strategist, Multi-Cloud 2.0 requiring a unified data platform, and end-to-end security ramping up. Shawn holds himself accountable for last year's bets, particularly noting that the expected "operating model transformation" driven by AI has yet to fully materialize, arguing that many organizations are still grappling with the hard changes to people, process, and technology required for true transformation. Our conversation pivots to what's next, starting with the evolution of AI from simple co-pilots to autonomous agents that will soon become mature process owners capable of completing end-to-end workflows. This shift will require a greater emphasis on verification, changing the industry's focus from time to answer to time to trust (or time to truth) as enterprises build verification stacks to ensure AI accuracy, recognizing that every mistake costs money and customer satisfaction. Finally, Rosemarin forecasts that growing energy scarcity will drive new AI economics, forcing serious programs to run AI like a business system by routing queries to the most efficient models. Furthermore, he predicts that data stops being an asset and evolves to a supply chain, necessitating a manufacturing-like process to refine structured, semi-structured, and unstructured data for uniform consumption by training systems. This new landscape will ultimately punish infrastructure complexity and reward the platform mindset that simplifies operations and removes friction through automation and orchestration. To learn more, visit https://blog.purestorage.com/perspectives/2026-ai-predictions-data-storage/ Check out the new Pure Storage digital customer community to join the conversation with peers and Pure experts: https://purecommunity.purestorage.com/ 00:00 Intro and Welcome 09:30 Look back at 2025 Predictions 17:33 William Gibson Quote on the Future 22:20 2026 Predictions - Copilots Become Agents 26:48 Verification and Time to Trust 30:30 Energy Scarcity and AI Economics 34:13 Data as a Supply Chain 38:50 Relevance Engines 42:10 Platform Mindset 45:43 Content Authenticity 49:37 Cyber as an Executive Imperative 52:35 Workforce Productivity 55:21 Summary of 2026 Predictions
Super Bowl 60 reignited the debate: boring game or defensive brilliance?On this episode of Sports Chasers Podcast – Monday Night Football Blitz, Kevin L. Warren and D-Dubbz Warren analyze how Seattle's defense controlled the game, why mistake-free quarterback play still wins championships, and what this Super Bowl says about modern NFL expectations.
While most fans called it boring, this was a perfect night for anyone who enjoys watching the New England Patriots fall flat on the biggest stage. We break down why Super Bowl 60 was far more entertaining than people want to admit, especially if you are a Jets fan or anyone tired of Patriots success. From Seattle Seahawks controlling the game with defense and the run, to New England Patriots looking overwhelmed offensively, this episode dives into why the outcome felt inevitable early. We talk about Drake Maye struggling under playoff pressure, why the Patriots' easy path finally caught up to them, and how Seattle won without needing a heroic performance from Sam Darnold. Was the game ugly? Absolutely. Was it satisfying? For at least one bitter Jets fan, it was four hours of joy. We also look ahead to what this loss means for the Patriots, why getting close does not guarantee you will ever get back, and whether Seattle can realistically repeat this formula next season.
Environmental Series. Episode #1 of 4. In 1851, a journalist named Henry Mayhew set out to document the lives of London's working poor. What he found was astonishing. In the richest city in the world, thousands of people made their living by picking through other people's trash. There were the bone-grubbers, who scavenged bones from gutters to sell to soap manufacturers. There were the mudlarks, mostly children, who waded through the filthy banks of the Thames searching for coal, rope, and bits of metal. And then there were the pure-finders. What's “pure” you ask? Well, "pure" was a Victorian euphemism for dog excrement. Pure-finders, mostly elderly women, spent their days scouring the streets of London for dog droppings, which they then sold by the pailful to tanneries in Bermondsey. The tanners used it to purify leather. Hence the name. We tend to think of recycling as a modern invention, something that started with the environmental movement of the 1970s. Blue bins, sorting instructions, that kind of thing. But as brilliant historians have uncovered, the story of how humans have dealt with their discarded materials stretches back millennia. For most of human history, the concept of "throwing something away" barely existed. To begin our series on environmental history, we're tackling the premodern history of recycling. Or as pre-WWII people would have called it: reclamation, salvage, scrapping, repair, and reuse. We'll meet rag-and-bone men and dustmen, shoddy masters and mudlarks. We'll discover how rags became paper, how old wool became new cloth, and how virtually nothing in the premodern world was ever truly waste. Find transcripts and show notes at www.digpodcast.org Learn more about your ad choices. Visit podcastchoices.com/adchoices
The Epstein Debacle Unfolds To support this ministry financially, visit: https://www.oneplace.com/donate/549/29?v=20251111
CA-CHOW!! Cars Full Reaction Watch Along: / thereelrejects Visit https://huel.com/rejects to get 15% off your order RATATOUILLE (2007) Movie Reaction: • RATATOUILLE (2007) MOVIE REACTION – WE DID... Gift Someone (Or Yourself) An RR Tee! https://shorturl.at/hekk2 The Jo(h)n Squad is back to give their CARS Reaction, Recap, Commentary, Analysis, Breakdown, & Spoiler Review! John Humphrey and Jon Maturan rev up their reaction and review of Pixar's 2006 animated classic Cars, a high-energy, heartwarming story about speed, humility, and finding purpose off the beaten path. The film follows hotshot rookie race car Lightning McQueen (voiced by Owen Wilson, Wedding Crashers, Midnight in Paris), whose obsession with winning lands him stranded in the forgotten desert town of Radiator Springs on the way to the Piston Cup Championship. What begins as an inconvenience slowly becomes a life-changing detour as Lightning learns the value of friendship, community, and slowing down to appreciate the journey. Along the way, Lightning forms an unlikely bond with wise tow truck Mater (Larry the Cable Guy, Larry the Cable Guy: Health Inspector, Cars 2), sparks a romance with determined attorney Sally Carrera (Bonnie Hunt, Jumanji, Toy Story 4), and gains mentorship from legendary racer Doc Hudson (Paul Newman, Cool Hand Luke, The Hustler). Packed with iconic moments like Lightning's crash on Route 66, Mater's hilarious tractor-tipping escapades, Doc's reveal as the Hudson Hornet, and the emotional Piston Cup finale, Cars blends laugh-out-loud humor with Pixar's signature emotional storytelling. We break down the film's themes of legacy, ego, and redemption, why Radiator Springs remains one of Pixar's most memorable worlds, and how Cars became a beloved franchise for generations of fans. Follow Jon Maturan: https://www.instagram.com/jonmaturan/?hl=en Intense Suspense by Audionautix is licensed under a Creative Commons Attribution 4.0 license. https://creativecommons.org/licenses/... Support The Channel By Getting Some REEL REJECTS Apparel! https://www.rejectnationshop.com/ Follow Us On Socials: Instagram: https://www.instagram.com/reelrejects/ Tik-Tok: https://www.tiktok.com/@reelrejects?lang=en Twitter: https://x.com/reelrejects Facebook: https://www.facebook.com/TheReelRejects/ Music Used In Ad: Hat the Jazz by Twin Musicom is licensed under a Creative Commons Attribution 4.0 license. https://creativecommons.org/licenses/by/4.0/ Happy Alley by Kevin MacLeod is licensed under a Creative Commons Attribution 4.0 license. https://creativecommons.org/licenses/... POWERED BY @GFUEL Visit https://gfuel.ly/3wD5Ygo and use code REJECTNATION for 20% off select tubs!! Head Editor: https://www.instagram.com/praperhq/?hl=en Co-Editor: Greg Alba Co-Editor: John Humphrey Music In Video: Airport Lounge - Disco Ultralounge by Kevin MacLeod is licensed under a Creative Commons Attribution 4.0 license. https://creativecommons.org/licenses/by/4.0/ Ask Us A QUESTION On CAMEO: https://www.cameo.com/thereelrejects Follow TheReelRejects On FACEBOOK, TWITTER, & INSTAGRAM: FB: https://www.facebook.com/TheReelRejects/ INSTAGRAM: https://www.instagram.com/reelrejects/ TWITTER: https://twitter.com/thereelrejects Follow GREG ON INSTAGRAM & TWITTER: INSTAGRAM: https://www.instagram.com/thegregalba/ TWITTER: https://twitter.com/thegregalba Learn more about your ad choices. Visit megaphone.fm/adchoices
The Pure - Blessed (Week 6) - Daniel O'Connor by C*Road Church
Happy Heavenly Birthday James Dewitt Yancey AKA Jay Dee AKA J Dilla. It's been 20 years since we lost one of the most innovative and influential music producers of our time. In our annual tradition on this day, we pay tribute to the life and work of J Dilla. #TurnItUp
The NBA Trade Deadline takes center stage in this episode of Sports Chasers Podcast – The Crossover featuring Ryan DeSouza.The panel delivers a no-narrative, objective breakdown of league-shifting trades, Giannis Antetokounmpo rumors, Ja Morant's future, and roster decisions from contenders and rebuilds alike — including the Cavaliers, Clippers, Knicks, Hornets, Rockets, Pistons, Jazz, Warriors, and Thunder.Highlighted Segment:
Toxins, chemicals, environmental exposure... How much is too much, how much should we worry, who should be concerned? The goal isn't to be afraid, but to understand how this fits into IBS management - listen to this episode of The Gut Show to learn more about TILT theory without going down a fear-based rabbit hole. Mentioned in this episode: MASTER Method Membership FREE IBS Warrior Summit Take the quiz: What's your poop personality? MCAS episode Thank you to our partners: mBIOTA is the next generation of the elemental diet. Developed with leading gastroenterologists and food scientists, it's the first formula that's both clinically effective and genuinely easy to drink. Pure, easily absorbed nutrients are essential, but the mBIOTA difference is in the details: from their proprietary Amino Taste Modification Technology (ATMT), to their fully vegan and gluten-free ingredients, mBIOTA provides balanced daily nutrition backed by science. The result is a game-changing medical-grade formula that helps restore GI function in patients with SIBO, IMO, IBS, Crohn's, EoE and more. Learn more at mbiota.com and save 20% off their 2 week protocol with the code GUTIVATE. FODZYME is the world's first enzyme supplement specialized to target FODMAPs. When sprinkled on or mixed with high-FODMAP meals, FODZYME's novel patent-pending enzyme blend breaks down fructan, GOS and lactose before they can trigger bloating, gas and other digestive issues. With FODZYME, enjoy garlic, onion, wheat, brussels sprouts, beans, dairy and more — worry free! Discover the power of FODZYME's digestive enzyme blend and eat the foods you love and miss. Visit fodzyme.com and save 20% off your first order with code THEGUTSHOW. One use per customer. ModifyHealth is the leader in evidence-based, medically-tailored meal delivery offering Monash Certified low FODMAP, Gluten free, and Mediterranean meals - expertly crafted to help you achieve better symptom control AND improve overall health. The best part? They make it easy by doing all prep work for you. Simply choose the meals you want, stock your fridge or freezer when meals arrive at your door, then heat and enjoy when you're ready. Delicious meals. Less stress. Complete peace of mind. Check out modifyhealth.com and save 35% off your first order plus free shipping across the US with code: THEGUTSHOW. Connect with Erin Judge, RD: Instagram TikTok Work with Erin FREE symptom tracker
In this Episode Permberton sounds awful Cast- Reza- LenaThe Magnificent Figaro- Danny DelucaGamemaster- Jared WitkofskyAl Key- Chris FrenchPerberton- Andrew Collins-AndersonKevin- Morgan JustTony 'The Toe' Tito- Chris ThielFeaturing music by Pressure Highway, Jordan Fickel, Danny Deluca and Motoshi Kosako This work is based on Blades in the Dark (found at http://www.bladesinthedark.com/), product of One Seven Design, developed and authored by John Harper, and licensed for our use under the Creative Commons Attribution 3.0 Unported license (http://creativecommons.org/licenses/by/3.0/).
Sportz and Thingz cranks up the heat this week with special guest Grant Mitchell from Full Command Podcast! SB showdown: Patriots or Seahawks? Faith in new Commanders DC Daronte Jones & DL coach Eric Henderson? Pick #7 steal? The Wizards get Anthony Davis, and does this make them officially playoff bound? Should the Wizards keep Keefe as HC? What Olympic Games are you watching? Plus this year's Grammy winners and much more! Pure fire! Tune In!
In this episode, Chris, Andrew, and David kick off with humorous stories about coding experiences across different languages, and then they welcome back guest Kevin Newton, who shares his journey from Shopify to Meta. Then, Kevin discusses the intricacies of Ruby and Python, particularly the challenges and trade-offs in their runtime implementations. The conversation then shifts to the development and adoption of the Prism parser in Ruby, highlighting its impact on various projects. Lastly, Kevin shares insights on his work with a pure Ruby YAML parser and a regex engine, emphasizing the complexities and joys of coding and parsing languages. Hit download now! Links: Judoscale - Remote Ruby listener gift; Kevin Newton X; Kevin Newton GitHub; Kevin Newton Blog; Python support for free threading; A Ruby Regular Expression Engine (blog post by Kevin Newton); Prism: Ruby 3.3's new error-tolerant parser (blog post by Kevin Newton); A Ruby YAML parser (blog post by Kevin Newton); Exreg; Chris Oliver X/Twitter; Andrew Mason X/Twitter; Jason Charnes X/Twitter
The boys are BACK talking weekend shenanigans, Blue Waffles pond hockey, Sway gets his fight, Stadium series, Sturm vs officials, Poitras, Trade market ++ PLENTY more . Make sure to follow us on twitter @OnlyBruinsPod @DowntownBoosy2 @BrettHoward_ @BobbieBrewski. Follow us on tiktok @onlybruinsFollow us on instagram @OnlyBruins_Follow us on Youtube @OnlybruinspodcastMake sure to check out our Pure hockey link and get the best hockey gear out there! https://alnk.to/bisa9vc
FULL EPISODE! This time on the PURE TOKYOSCOPE Podcast, authors Matt Alt (Pure Invention: How Japan Made the Modern World) and Patrick Macias (Mondo Tokyo: Dispatches from a Secret Japan) are BACK with their first show of 2026! It's an all-current-events episode with stuff about anime, Tokyo, and tech. Enjoy! Join the PURE TOKYOSCOPE Patreon! You'll get access to full episodes, bonus content, our Discord server, and an archive of past episodes. Head over to Pure TokyoScope Patreon to subscribe today! INFO: Matt Alt on Bluesky; Patrick Macias on Bluesky; Pure TokyoScope on YouTube. The podcast is produced by jaPRESS LLC© and edited by Patrick Macias. Theme song by Marxy, v.o. by RInRin Doll
The Sports Chasers Episode 476 delivers a deep-dive into the NBA Trade Deadline and a full Super Bowl Seahawks vs Patriots preview.
Ad Free listening, exclusive episodes and early releases: https://www.patreon.com/c/Footballforkids OR Get the same perks via Apple Podcast subscription: https://podcasts.apple.com/gb/podcast/football-for-kids/id1627973563 Darren@Footballforkidspodcast.com Dan Burn Newcastle United story for kids football podcast England defender Wembley journey resilience never give up inspiration Premier League England From released academy kid to Wembley hero, this episode of Football For Kids tells the incredible true story of Dan Burn. Born in Blyth, a Newcastle United supporter long before he was a player, Dan's path was never the easy one. Released at eleven, working part time at Asda, battling injuries, travelling miles just to train, he kept going when most would have stopped. This is a warm, inspiring football story for kids and families about belief, hard work, and staying ready. We follow Dan from grassroots football and non league pitches to the Premier League, lifting a major trophy at Wembley for his boyhood club, and finally earning an England call up at thirty two. Perfect for young football fans, car journeys, bedtime listening and anyone who needs a reminder that progress is not always loud. A powerful lesson in resilience, patience and never giving up. Screen free. Fact based. Pure football magic. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Enio Augusto and Marcos Buosi talk about everything in the world of running shoes, plus other running-related gear. BECOME A MEMBER OF THE CHANNEL!!! Here you'll find analysis, reviews, tips, predictions, questions, answers, numbers, prices and opinions. Information with good humor, doubts answered and content to spare. Send in your question. Listen, learn, teach and have fun with us. - Everything about Mizuno's Hyperwarp line: Hyperwarp Pure, Hyperwarp Elite and Hyperwarp Pro. - Discount codes: CORRA BARATO - PFC; KEEP RUNNING BRASIL - PFC https://www.instagram.com/keeprunningbrasil/ https://www.youtube.com/@KeepRunningBrasil https://www.facebook.com/keeprunningbrasil https://www.linkedin.com/company/keep-running-brasil/ https://www.instagram.com/keepers.run/ - BECOME A MEMBER OF THE CHANNEL ON YOUTUBE
Things go completely sideways on the Carton Show as Craig Carton and Chris McMonigle react to some of the wildest WFAN callers you'll ever hear. From Limo Driver Ed pitching his insured BMW limo services, to a BMX-riding superfan offering to be “Gay for Craig,” the phones deliver nonstop comedy. The conversation spirals into the viral “Gay for May” slogan, Super Bowl fandom insanity, and Craig's classic off-the-cuff humor that only WFAN can deliver. Pure chaos, pure laughs, pure Carton Show.
It's not about what you're doing in the decade you're in, it's about what those habits turn into in the next one. In this conversation we get into why we're leaning harder into relative strength and athletic movement (high-volume calisthenics, hangs, pull-ups, push-up variations, sandbags, sled work, reaction drills, rope flow, tennis balls, and even bouldering) without abandoning the basics of strength training. We talk about how coordination is trainable, why expanding your “movement vocabulary” can carry over into sport and everyday life, and how staying capable as you age comes down to building skills that keep you confident, reactive, and durable. Plus: insane feats of strength and speed making the rounds online, strongman grip madness, and why the goal isn't just being strong once, but being strong, mobile, and useful for decades.Special perks for our listeners below!
Devoted to sculpting "the essence of things," Constantin Brancusi transformed modern art with his pared-down forms, which would go on to inspire designers around the world. Born in 1876 in a small Romanian village, Brancusi left home very young to explore the country before taking the road to Paris, the capital of the arts.
Devoted to sculpting "the essence of things," Constantin Brancusi transformed modern art with his pared-down forms, which would go on to inspire designers around the world. Dive into the history of the great figures and landmark events that shaped our world! With enthusiasm and talent, Franck Ferrand reveals the behind-the-scenes of History with a capital H: mysteries, secrets and little-known episodes, a gift for lovers of the past, from prehistory to contemporary times. Hosted by Audiomeans. Visit audiomeans.fr/politique-de-confidentialite for more information.
Today on Pure Hoops, Justin Turpin (@justinmturpin) and the guys examine the Celtics' possible moves before the NBA trade deadline at 3:00 PM on Thursday.
The Smooth R&B song stylings of Ms. Toni Redd!!
We're dealing with the second half of the Beatitudes today. First a brief recap of where we were last week, and then jumping right into the rest of it. Blessed are the merciful, for they will be shown mercy. This is not at all how the world works. But when mercy is shown, it has the power to transform. Not only the one being shown mercy, but the one who is merciful. This is how we start making a better world. Instead of retribution, we show mercy and grace. Blessed are the pure in heart, for they will see God. This one is a hard one, and lots of people have weighed in and I'm really not sure, but let's take a crack at it. This one makes me uncomfortable to begin with. Pure of heart sounds like, spotless, untainted, perfect. We immediately connect it to some kind of moral perfection. But what if it's something different? What if pure heart refers to an undivided heart? A fearful heart sees a threatening God. A shameful heart sees a condemning God. A controlling heart sees a micromanaging God. What if we dwelled on the divine and the things of the divine? Things that are true, noble, right, lovely, admirable, excellent, and praiseworthy. When we desire those things - desire the divine and the things of God suddenly the whole world becomes charged with the presence of God. Blessed are the pure in heart. Blessed are the peacemakers, for they will be called children of God. Let's talk about a whole lot of other blesseds. Blessed are those who are brave enough to shake up the status quo. Blessed are those who are courageous enough to speak truth to power. Blessed are those who are willing to jump into the fray and speak words of grace and peace. This is tough stuff. If we actually start living this way, we should expect pushback. But Jesus says, "Blessed are those who are persecuted because of me, for theirs is the kingdom." God is on our side - with us - near. Speaker: Aaron Vis Scripture: Matthew 5:1-12 http://bible.com/events/49559249
TRANSCRIPT Gissele: Hello, and welcome to the Love and Compassion Podcast with Gissele. We believe that love and compassion have the power to heal our lives and our world. Don’t forget to like and subscribe for more amazing content. Today we’re talking with Rashi Nayar, and she’s on a mission to shift humanity from lower states of consciousness to higher states of consciousness. Gissele: I’m so, so excited to talk to her today. We’re gonna have a great conversation and she’s gonna do a practice with me. Maybe you can tag along as well. So welcome Rashi. Hi Gissele: Rashi. Rashi: Hi Gissele. Rashi: I’m so honored to be here with you. Gissele: Oh, thank you so much for being on the show. I’m really looking forward to it. Gissele: What led you to be on this mission to increase the consciousness of humanity? Rashi: My own path to increasing my own consciousness, you know, to operate from higher states of consciousness, which is peace, joy, and love. You know, these are actually who we are and we explore that more as we go along. Rashi: But I was very depressed for 18 years of my life, you know, since [00:01:00] 2007 when I lost my dog and in a car accident. And that was the first time I had experienced unconditional love that way, you know, someone loved me for who I am, not for, I had to prove myself or I had to perform. I had to be someone. Rashi: I could just be whatever. And he loved me that way, right? And it’s very beautiful to get that type of love from someone in that way. And when I lost him, he was only two years old and he met with a car accident and he died in my arms. But that was like it was like an opening. And it was like my heart broke for the very first time. Rashi: I had never experienced something like that before and I was grieving, but that was the first time I started asking questions like, who am I? Why am I here? What’s our true purpose? What is God? What is enlightenment? You know, all of that. Because what my soul was longing for was to connect back to that unconditional love that I had experienced from him. Rashi: But I didn’t know, [00:02:00] I was always looking outside, you know, outside myself. And I entered toxic relationships because I thought that other people were gonna give that to me. I was very disappointed and I was very depressed. I wasn’t chronically depressed. I was depressed, but I was also living in a low, low grade anxiety for a very, like, very long time until 2025. Rashi: This year when I lost another family member, I lost my aunt to ms. So that episode really shook me to the core and it forced me to sit in stillness with just with myself. Like no more reading books, no more going outwards, right? Because that’s what I always did. I would go to a spiritual retreat. Rashi: I would, you know, go outwards, read books, do therapies, you know, do coaching. I did a lot of work, technically a lot of healing work, and maybe that was required, but. Nothing really significantly changed. You know, I was still the same. I was [00:03:00] still living with low grade anxiety and I was still the same. And but this time I went inwards and I connected with the part of myself that is infinite, that is peaceful, that is love. Rashi: And I realized that everything that I thought about myself or the identity that was caring was actually not who I truly was or not, or not who I am. The identities or the masks that I was wearing, you know, the mom, the entrepreneur, and the aunt and the friend, all of those were really masks and identities that I was carrying. 
Rashi: But who I truly am, my most authentic self is actually free already. She’s already free. And it’s not even a, she, I wouldn’t even, we cannot really label, right? It’s, it’s. The vast infinite being that we are is inherently peaceful. Is [00:04:00] inherently open. Infinitely joyful. Infinitely blissful and loving. Rashi: Compassionate. That peaceful, that’s who we are inherently. And I, stayed in that high, right? Let’s just say I was in those higher states of consciousness for three days straight and I was floating. Gissele: Mm-hmm. Rashi: Yeah. I was so high. But then came the day I went down, the anxiety was back again, and I was like, wait, I thought I was enlightened. Gissele: I did it. What happened? Rashi: But that is what what’s supposed to happen, because now. I could see the contrast, right? I had experienced something so profound, and now there’s the contrast or the lower states of consciousness, which is fear, anxiety, lack. I was back, I was back in the fully humanness, you know, the human part of me, but [00:05:00] now my aunts, so she passed away and three days later she, she was in my head, she kept telling me, Rashi, love yourself. Rashi: Rashi, love yourself rash. It’s like, it was constant. And I realized that I didn’t love the parts of me that were so-called dark or negative. I was trying to get rid of anxiety. I was trying to get rid of the darkness, right? I was trying to resist whatever I was experiencing in the moment, and that was profound because now my only job is to love myself unconditionally. Rashi: In all parts of myself, the shadows they call it in the psychology. But I realized that the parts that I’m trying to get rid of, the anxiety, the so-called depression, the low level depression that I was constantly feeling the numbness or the sometimes of sometimes just sadness, [00:06:00] like it would just come up. Rashi: What if I fell in love with those parts of myself? Then what would happen? And that became the journey that became the practice. And when I did that, I no longer resisted those. So it was just the experience and me in love with whatever what is right, whatever the experience is. And now I’m whole, now I’m not broken, you know, there’s some, nothing’s wrong with me. Rashi: You know, and that was the narrative that I lived with for 18 years. If something is wrong with me, I need to be fixed. I need the healing, I need the therapy. But really there is nothing inherently is wrong with me. We all experienced this human side of things and what if I fell in love with the humanness, Rashi: And that’s why the being that I experienced, so in those three days when I experienced the so-called enlightenment or the awakening, it was when I touched my being. And our being is inherently free. We who we are, our [00:07:00] authenticity, we are inherently free. We are peaceful. And yet the human side of things or you know, how we grow up, our conditioning, our identity, our beliefs that we carry, all of that is there. Rashi: And that is the conditioning. So the constructed itself or the human is still there, but we cannot try to get rid of it. It’s like, you know, the snake leaves its skin. By its own. We cannot force the skin. We cannot rip the skin out of the snake, you know? So it’s going to happen only when we fully and completely fall in love with who we are in the humanness. Rashi: And that brings me back to that connection, to that love, to that peace that resides within all of us. So that’s in a nutshell, that that’s the story. 
That’s why I do what I do. Gissele: beautifully said. First I wanna go back to the, the loss of your dog as a person who had a dog. Gissele: Never wanted a dog to be honest, but we got one for a family and felt completely in love with the dog. And after [00:08:00] 13 years to have lost him. And I realize now that he had to go the way that he did. But he did teach me about unconditional love and patience and forgiveness and joy. And so the grief that you experience after having that can feel very overwhelming. And so where I was going with this question is, the human experience can feel so real, I have sat with some really difficult emotions it’s almost as if your mind tells you that something’s gonna happen something bad or you’re gonna die. Gissele: What do you say to people that say, you know, This is all we are because this is what we can concretely see and touch and experience. How do you go from that to understanding and embodying the fact that we are more than this reality? Rashi: Yes. Oh, that’s such an important question. Something that I live with almost every day. Rashi: You know, there’s this low grade anxiety that I still experience on a daily basis. [00:09:00] The only thing that’s different is I’m no longer resisting it. Gissele: Hmm. Rashi: So, you know, and we human beings, we are either, we’re only living in two A states at all time. We’re either to attach to the state that we want, which has happiness, joy, love, bliss, or we are resisting the lower states of consciousness, which is anxiety. Rashi: We’re really in, in these two states or all times. So it’s like when we get that love from the dog or the baby, you know, I have two babies, two little girls. And I’m like, I want it all the time. Right. So now there’s attachment, because if she says something like, I have a 4-year-old, which is a, she’s a very mischievous toddler. Rashi: Right. When you say something that can feel like hurtful. I mean, I don’t take her things seriously because I know better, but Gissele: yeah, Rashi: for someone else it could feel like, what, what would just happen? Like we were in love and now, or the, the spouse says something, right? Like, I have my husband who really triggers me, so he’s, he’s like my [00:10:00] best enemy, right? Rashi: Like he’s my favorite person, so mm-hmm. He says some things that can feel hurtful, and in the beginning it really used to bother me because I would resist those things. I would resist the experience of whatever’s happening in the moment, right? But now I lean into it, and that’s the difference when we are getting this anxiety or when we are getting something and the experience doesn’t feel pleasant. Rashi: The mind itself because the mind is like that. Mind wants to go navigate towards pleasure and it wants to avoid pain. That’s how the mind is, right? Gissele: Mm-hmm. Rashi: But we are not the mind though. So in the moment, if we can witness the mind’s neuros, whatever it does is like trying to resist. What we do is we say, first I love you mind. Rashi: Because the thing is the mind in itself is what it’s doing. It’s movement what it’s supposed to be doing. [00:11:00] And the second thing is, I love you, anxiety and that love it. It’s the experience that feels heavy, that feels not good, right? And that experience now is infused with love. So there’s no longer a problem with what is, with the experience itself. 
Rashi: And there’s a beautiful book written by Byron Katie; the name of the book is Loving What Is, and apparently, you know, she’s enlightened, so she’s the enlightened being, right? We can talk in that way. I’m not enlightened for sure, but that’s what she meant. I didn’t understand it back then. Rashi: But this is what she means: whatever our experience is, if we are not attaching ourselves to it, which means we are not craving more of that, or we are not resisting that, [00:12:00] then we have no problem with the experience. So the experience in itself is not a problem, Gissele. It’s our relationship with the experience that’s the problem. Rashi: So the anxiety in itself is not a problem. It’s how I relate to anxiety, how I see it. That in itself is the issue here. So if we’re like, okay, anxiety is here, can I love it? Can I lean into it? And when I do, it can feel scary, because some people might think that if I lean into that, it’s gonna expand, it’s gonna grow more. Rashi: Right? That’s sometimes where the belief is, and I definitely have that, but actually what happens is the other way: that anxiety or that bubble becomes love. And you know, there’s a great saint in India I really, really respect. He’s no longer in body, and I always keep this picture over here. Rashi: Mm-hmm. [00:13:00] His name is Neem Karoli Baba, and apparently he’s the saint behind Apple. You know, Steve Jobs went to his temple. Rashi: I love him. I’ve never met him, but somehow I love him. Rashi: And, you know, love has no logic. Gissele: And it has no boundary either. It doesn’t mean that you can’t love somebody who’s passing. And I think that’s the difficult perception: we think that when somebody crosses over, the love ends. I still love my dog Bear and I still think about him. Gissele: I think about caressing him. I talk to him. But anyways, go on. Rashi: Yes, you’re right. Exactly. So, because love is unconditional and love is who we are. Mm-hmm. Which I’m going to take you back to so you can experience it yourself. But he used to say that suffering brings us closer to God. Rashi: Mm. And God is love. And so suffering, meaning anxiety, pain, whatever, chronic pain. I mean, people who are his devotees and people who have written books about him, they [00:14:00] said, I’m so glad that there’s this pain in my life, because it helps take me back to him, to love, or God. And that’s exactly what we’re doing here: we are saying, whatever comes into our experience, I love you. Rashi: Anxiety, I love you. Guilt, depression, grief. It can feel really hard in that moment, but that is the portal, the bridge between the lower states of consciousness, which is anxiety, fear, all of that, and the higher states of consciousness, which is love, peace, joy, abundance. Saying it mentally in the beginning could feel like a mental repetition. Rashi: And then you’re like, I love you. I honor you. Even if you’re here, I love myself. I mean, that’s loving kindness. The practice of loving kindness, metta, in Buddhism is loving ourselves and then loving people in our lives and loving [00:15:00] what is, you know. So that’s a tool people can use, and I would love to hear how their life transforms. Gissele: Hmm. Yeah. It’s definitely something that I use myself, and what I realized was that the more love I had in my heart for myself, the more it overflowed to other people.
Like I didn’t need them to be different. I didn’t need them to change ’cause I didn’t need them to give me anything. Gissele: I really resonated with what you’re talking about, resistance. I noticed that one thing about myself is when I encountered the most resistance to what was happening, my inability to accept and surrender, had to do with my belief that if I surrendered, I was giving up. Gissele: That was accepting. What is that? it’s like saying that there was no hope or no chance Rashi: Mm-hmm. Gissele: I didn’t realize that the deeper thinking behind my resistance had to do with that. This has power over me, so if I give into it, it’ll take me, it’ll do what it wants to do. Correct. And so when I let go of that story [00:16:00] and allowed myself to surrender, there was a level of peace, but it was hard to get there. Gissele: I just wanna acknowledge what you’re talking about is so brilliant, but it can feel really challenging. And it doesn’t have to, but it can. Because I remember when I would ask for guidance from my higher self God source universe, the guidance that I always got was Love it. Choose it. Gissele: And I’m like, well, I don’t wanna choose this. I don’t wanna accept this. And so, but I would lie to myself thinking that I was not in resistance, but I was in resistance. ’cause my body was so tight. Rashi: Yeah. Gissele: And so, it can feel difficult to let go of that resistance. And we are. Gissele: Not really taught to surrender. we’re doers. Rashi: I just gotta keep grinding it out and eventually this is gonna come through. Gissele: how is that counterintuitive to allow love? Rashi: I love that question because I was exactly what you’re describing. For 11 years of my life, I was a [00:17:00] serial entrepreneur. I’ve scaled my own businesses to seven figures plus. And I learned it from my dad. Rashi: You know, it’s a learned behavior. You keep pushing through, you just keep doing, you know, and that’s discipline. Yeah. And consistency. Like those words feel really good. Discipline, consistency and but it didn’t feel good to my body. Gissele: Oh, Rashi: right. It does. It feels like, oh, it, it felt like I’m choking, but I still kept pushing through and I burned out very much. Rashi: So that’s why, you know, I no longer do what I used to do for 11 years and it just didn’t feel aligned anymore. I wanted to open my heart. I wanted to lead from the heart. So, to answer your question, Gissele, when you say that you are the doer, I wanna take you into this is again, a constructed and identity. Gissele: Yeah. Rashi: Right. This is, again, something that we have [00:18:00] adopted from our environment and from our parents, maybe from our teachers, someone we really admired because they had this habit of keep going and it felt really inspiring, right? Because they accomplished so much and the narrative that we. Play in our head is if we keep doing that means, you know, we’re bring, we’re service. Rashi: This is service to humanity and we’re serving, we’re adding value. All of that feels really good, right? Gissele: Mm-hmm. Rashi: And it feels like we’re in service. But the highest service, and I haven’t come to that point myself, but I get glimpses of that, is surrender. And I’ll tell you why. The highest service is surrender is because when we are surrendered, we are now the channel for God will to flow through us what God wants us. Rashi: And that is the path of least resistance. The [00:19:00] path of least resistance is when we are, it’s not my will, it’s God’s will. 
The problem. The problem, we don’t have a problem. The brain has a problem. And this is, now, let’s go back to scientifically, understanding the scientifically how this works is the brain wants to solve problems because our brain is from the ancestors we lived. Rashi: Our brain is coming from survival. You know, it, it doesn’t know how to thrive. It knows how to survive, right? And survival means keep pushing through. It means keep solving problems because there could be a line behind us and if we don’t solve problems, we are gonna die. So the brain is used to solving problems. Rashi: So it’s not necessarily you that wants to do, it’s your brain that wants to fix the problem. Gissele: Mm-hmm. Rashi: So Rashi: once you understand who you are, then you don’t relate to your brain as yourself. That, and that’s what we do, is we relate to our brain’s [00:20:00] mechanism or our mind’s workings as ourselves. We identify that that’s who I am, but that’s not who we are. Rashi: when we realize who we are, then we are free. Then we can see the workings of the mind as the workings of the mind. And we’re like, ah, that’s what the mind wants us to do right now. But what do I wanna do? Which means I, the, which I’m gonna take you to let you experience that for yourself. So we can do that whenever you’re ready. Gissele: Yeah, of course. I just wanted to mention a couple more things. in my life surrender has been so fundamental. Mm-hmm. It’s led to some magical things happening. But what I noticed was that on the things that mattered the most to me, or had the most limiting beliefs about surrendering is really difficult. Gissele: Mm-hmm. I could surrender, like small things or things that I believed could happen, but the things that were bigger, that bigger than I thought I could hold in my container, I [00:21:00] had a hard time really releasing or surrendering. Rashi: Mm-hmm. Gissele: And so for me, the, the whole concept of surrendering has been a minute by minute step by step by step. Gissele: I’m surrendering a little bit more. ’cause people think, well, I just surrender and then it’s. But if you have limiting beliefs around it, surrender can feel really dangerous. It can feel, it can feel unsafe. And that was one of the things that, the word that came up for me every time I tried to surrender about the different things I was surrendering about is like, this feels unsafe. Gissele: This feels unsafe. So like you said, being able to soothe your mind in, in your emotions and saying, you’re safe. You know, we got this. Mm-hmm. we’re just taking a baby step. That, for me, has gone a long way, Gissele: I continue to surrender more and more every single day and it feels so good to not feel like you have to carry the whole world with you. That you have God, Source, Universe helping you. And usually things turn out way better than I even anticipated. but here’s how stubborn I am [00:22:00] or this ego person is. Gissele: That should have been enough. Like how many times does the universe have to show me, like these magical things. And I’m like, well, but not in this case. Gissele: I wanted to ask you a couple more questions. The first one is talking about who we are. I’ve heard many people that say that we are God because everything is God source energy. We are God, we are made from that. from the same source and that God’s will is our will and our will is God’s will. And I had to kind of grapple with that. 
Gissele: And the reason is not that I think it's blasphemous or anything like that; it's that I kind of fell into a pitfall where I thought I could force my will.
Rashi: Yeah.
Gissele: Rather than asking, what's my genuine will? What's my genuine identity? And if I truly believed it, I wouldn't be resistant to anything. If I truly believed I was the creator of my life, of my thoughts and emotions, and [00:23:00] God was working through me, and I'm made up of the same juice as everything else, I wouldn't resist anything in my life. I would just choose something else. Just curious as to your thoughts about that.
Rashi: Wow. Again, this is amazing, because yes, we are God, but yes, we are also humans, you know?
Gissele: Mm-hmm.
Rashi: God gave us this body, a very limited body, right? Where I come from, the Hindu culture, in our religion we have flying gods. There's a monkey god called Hanuman. I don't know if you've heard of him. He used to fly, right? He completely transcended gravity; he broke all the laws. And Neem Karoli Baba was apparently an avatar of Hanuman, because he could be in three different places at the same time. So people in Delhi were like, Baba's with us, but the people in Aaba were like, Baba's with us, how is that possible? And then there were people in Bombay going, but Baba's with us, how is that possible? So he completely nullified [00:24:00] the laws of the universe, the laws of gravity. People used to say that he was God, and he had done a lot of sadhana, the yogic practices, to come to that. But we don't do that. We're mothers and we live in a household, so obviously we don't have the luxury to meditate from morning until night. We can't do that. So we have to understand that we are limited in the body sense, but we are also unlimited with our minds, in that what we can think, we can create. So in that sense, yes, we are God, but yes, we are also human beings. So the ego in itself is not a problem. That's what I wanted to say: the ego in itself is not a problem as long as we can stay as the witness and watch the ego play [00:25:00] out.
Gissele: Yeah.
Rashi: Ego, meaning the constructed self. And if we talk about the brain, the brain has a certain neural pathway that has been established. The non-dualistic teachings, the Advaita, call it the spider web, or the veil; the Christians call it the veil. It's the neural pathway in the brain that has been established as our identity, our beliefs, our thoughts, our perceptions, all of who we think we are, the constructed self or the ego. We are getting away from that, and I have at least 39 years of that to get away from, to collapse that completely and come to higher states of consciousness, which is a completely new neural pathway. Establishing that is a muscle; it's almost like lifting weights in the gym. It takes practice. So this is a practice, and like you said, [00:26:00] surrender is not a one-time thing.
Gissele: Yeah.
Rashi: I think Eckhart Tolle has written about this, that for him the surrender just happened and he just disappeared, right? He became enlightened just like that, which I thought I had experienced before.
Rashi: But there are some beings that have experienced that, and they stayed in that bliss and that joy. I don't know what that feels like; for me it's a practice, and I don't have a problem with that. I'll tell you why. Because I'm able to see the constructed self and the neuroses that come with the constructed self, you know?
Gissele: Mm-hmm.
Rashi: I wanna see it like that. I want this to unfold as it is unfolding, because then the suffering, the ego, is a portal. It becomes an invitation to come back to myself every single day. Every single day. Now I'm a conscious creator. I'm consciously choosing to [00:27:00] return to my original state, which is peace, which is love, which is joy, which is compassion. There's a part of me, the ego, and I can still hear its voice: are you kidding? You don't wanna be enlightened? Like, forget about all of this. I'm no longer chasing it. For 11 years I did chase enlightenment. It becomes the shiny object, right? Just as we chase the seven figures, we wanna be a millionaire; it's the same thing with spiritual money, which is enlightenment. Everyone wants that. But what's the problem with us right now? What if there is no problem with us as we are? What if the way you're surrendering, the way you're being, the way you're healing, is exactly how it's supposed to be? It makes you whole and complete. It's how the Creator wants to experience herself through you, with all the mess. It feels very [00:28:00] messy, yeah, but what if that's how it is supposed to be? And that is what is. If you're resisting surrendering, that's perfect too, no problem with that. We can have a spiritual identity as well. You know, spiritual people are supposed to be high, right? That's all part of the identity: they're not supposed to resist, they're supposed to surrender. That could be a constructed self as well. So the invitation here is to just live as yourself completely, to love yourself and meet yourself where you are. And I think you're doing a great job at that, Gissele.
Gissele: Thank you. You mentioned spiritual people. I feel like what I chose to come here to learn was really love, like true unconditional love and compassion. And I understand it. I can say to you, we must love all, including those we deem our enemies. In fact, some of our enemies are our [00:29:00] best friends, because they are helping us remember who we are.
Rashi: Okay.
Gissele: And yet there is a small part of me that still believes that some people who behave in negative ways, who are very hurtful, should be fought, or that we should fight injustice and fight oppression, even though to me that's just another level of resistance, right? There's this little me, this little kid, because of her family dynamics, that still sees somebody as needing saving and other people as needing to be less selfish. That's what I'm grappling with. To create a truly loving, equitable, compassionate world for all, and I have to emphasize the all, it has to include those who are most hurtful. It has to include people who are hurting other people. So I think that's the thing I grapple with. On the one hand, [00:30:00] I can understand that we're not really this reality, that this is just sort of like a play, right?
Gissele: And yet at the same time, it's hard for me to witness the suffering of people who don't believe that or are not experiencing that, and to see people suffer on a daily basis.
Rashi: Yeah, exactly. Exactly. Very, very powerful, what you just said. And I wanna ask you a question here. You said there's a part of me that still doesn't really like that, you know?
Gissele: Hmm.
Rashi: There's a part of me that's resisting. My invitation is: what would happen if you really fell in love with this part of yourself that's not loving?
Gissele: Mm-hmm.
Rashi: Because then there's freedom to really be. We include all dualities within us. We do: we are the saint and we are the [00:31:00] sinner, because the seed of whatever the sinner is doing is within us as well. We're just not choosing to act on it, that's all, but the seed is there. We still get negative thoughts. I remember I used to get thoughts of hating other people. I would get jealous of other women, all of that, right? So apparently less than holy, less than saintly. Right, that's who I am. What's the problem with that? That's the thing: if I can accept and love the parts of me that don't feel so holy, that don't feel so loving, then what would happen? Then I'm free.
Gissele: Hmm.
Rashi: Right. So that's the invitation, because the thing is, who you are, Gissele, everything is it. It looks like the world is happening outside of us. Like we have a body, and the world, and me, I'm happening outside of you in the Zoom room, but [00:32:00] actually I'm happening within you. Because you are awareness. That's who we are. We are pure awareness. Let me take you back to when we were babies, right? When the baby is born, fresh out of the mother's womb, it never says, I am Rashi. No, right? It never says, I'm a girl or a boy. It doesn't say, I'm zero years old. Nothing, right? It's in a state of pure being. Pure being, which means aware, or I am.
Gissele: Hmm.
Rashi: Just this. I'm not this or that. I am. And when we say this to ourselves... I want to invite you, Gissele, to say this to yourself. You can even close your eyes, because I really want you to experience this firsthand, and even the listeners.
Gissele: Yeah, of course.
Rashi: Okay, so just close your [00:33:00] eyes. Now go back to when you were a baby. I don't want you to go back and track your memory, because you might not have a memory of being a baby, but I want you to have this as an experience, a direct experience: directly experience yourself as just being born, fresh. No thoughts, no emotions, particularly no judgments, no perceptions. Just this pure state of I am, or I am aware. Pure awareness, pure presence, pure being. [00:34:00] See yourself, have a direct experience of yourself without any name, without form, without any identity. Just pure nothingness. And let me know when you're there.
Gissele: Okay. I'm there.
Rashi: Okay. So stay as you are. This is your original nature, your original state of being. Stay as you are. If any thought arrives or comes to your awareness, you can just ask it to wait outside. We'll ask it to wait outside the Zoom room for a bit, and we can [00:35:00] take our thoughts back later on. We can pick up our identity later on. You can pick up your name, beliefs, everything later on.
Rashi: But for now, just stay as you are. I am. And now I'm gonna ask you some questions about your true nature. So as you are, just the state of I-am-ness, just pure awareness, are you inherently peaceful or inherently disturbed? Mm-hmm. Yes. Okay. So as you are, I am, the other question is: are you open, or are you closed? [00:36:00]
Gissele: Open.
Rashi: Mm-hmm. Open. Right now, stay as you are. Just empty, empty, empty. Stay as the awareness that you are. Now, as you are, the next question is: do you have an age?
Gissele: No.
Rashi: No? Okay. Hmm. Okay. Stay as you are. So if you don't have an age, were you ever born?
Gissele: Yes.
Rashi: I want you to bring your memories out too. Take your memories outside the Zoom room, keep them out, and just stay as you are. Come back to just pure awareness. [00:37:00] The invitation here is to have a direct experience of who you are. So as you are, who doesn't have an age, were you ever born?
Gissele: No.
Rashi: Mm. So if you were never born, will you ever die?
Gissele: No.
Rashi: Yes, exactly. And stay as you are. We're going to go deeper. When you stay as you are, direct experience: are you finite, which means can you be put into a box like a body, or are you infinite, and the body is also within you? Just see this very clearly, and I want you to have a direct experience. Your mind might tell you something else, but that's [00:38:00] just a thought. So I want you to have a direct experience of this. Stay as you are. Are you finite or are you infinite? Are there any boundaries between you and the experience, as you are? No. No. Right. Hmm. Are you naturally accepting as you are, or are you naturally in resistance?
Gissele: Naturally accepting.
Rashi: Hmm, yes. As you are, [00:39:00] is there a problem?
Gissele: No. There are no problems.
Rashi: There are no problems. So as you are, are you whole and complete, or do you need anything to complete you?
Gissele: No.
Rashi: Hmm. Okay. So whatever you just said: I have coached so many people around this, I have taken so many people into this experience, and everyone had the same answer as you. So who we are is this infinite being that is inherently peaceful, inherently [00:40:00] infinite and eternal, which means it doesn't die, was never born, has no problems, is naturally accepting, and doesn't need anyone to complete her. This whole is peaceful, accepting, loving. That's our natural state of being, and that makes us one. That's who the other person is as well. And if you stay as you are, there's a last question I wanna ask you. Come back to I am. Do you even need God to fulfill you here, as you are? [00:41:00]
Gissele: No.
Rashi: Mm. So you need no one to complete you, because in yourself you are inherently complete. So now we're gonna come out of the experience. You can take your time, maybe rub your hands, and slowly, when you're ready, you can open your eyes.
Gissele: Hmm. It's interesting, 'cause when I was in this class, I had an experience where I went into meditation and went into that same void, and it was like nothing I'd ever experienced. I don't think I've ever shared this on this podcast. It was like I wasn't my body, I wasn't anybody. And I had pretty bad anxiety in those times, and I didn't have anything. I didn't have anxiety, I didn't have anything. But I didn't wanna return.
Gissele: So I guess whoever was leading the class had to kind of bring me back, and [00:42:00] I was really skeptical in those moments. I thought, well, maybe this is my imagination, until I got home and the babysitter kept saying that my daughter was hysterical, because she kept saying, mommy isn't coming back, she isn't coming back.
Rashi: Oh.
Gissele: So yeah, that was interesting. And I thought to myself, well, I don't ever wanna go that deeply into anything again, so that I don't lose the choice to come back. But I've been trying to go to that void, and it was surprisingly easy. I think what helped me was really, like you said, keeping your thoughts at the door. That was helpful. It was surprising how much I could just not think of something. And when I observed myself thinking something, I could just say, no, go back to the door. At one point I was tempted not even to listen to your questions, like, okay, I wonder if I should keep everything at the door.
Rashi: Yeah.
Gissele: But then when I let your questions in, sometimes I would move to something else, I would go to a thought, which [00:43:00] meant I had to go back and go, nope, you gotta go back to the door. But it was great, and it's so surprisingly simple to remember. I just find that sometimes I go back and hold onto those identities, like, oh, this is hard, or I'm getting stuck in anxiety.
Rashi: Sure. Yeah.
Gissele: So I have to be really conscious of the story I'm telling myself about myself, right? Like, how much of a story am I telling about what identity I hold or what I think should be? The more I create distance between the stories of who I think I am and who other people are, the more I find I open myself to seeing the divinity in myself and in other people. But it took me a long time to figure out that loving all wasn't just myself and people. It was everything.
Rashi: Mm-hmm.
Gissele: It was those things that we struggle with, all of it. And there are certain parts of the journey that I'm learning to love [00:44:00] more, like what I was talking about: seeing children suffer. It's hard to bear as a human, quote unquote.
Rashi: Yeah.
Gissele: And yet I have to remind myself that that doesn't mean I don't do the things I came here to do. This is why my mission is not just to learn love for myself, but also to share that with others, whether it's helpful for them or not. Not from a place of I need you to change, but from a place of, this could be helpful to you. But it's an interesting journey, isn't it?
Rashi: It is. And you know, it's hard to bear witness to the suffering of other people. That's because we love so much.
Gissele: Mm-hmm.
Rashi: Right? And it is hard. But sometimes we get into the trap that we are supposed to be loving people, so we should be loving everyone, right?
Gissele: Mm-hmm.
Rashi: And when someone is doing less than loving things, we're like, oh, but I'm supposed to be a loving person.
Gissele: I mean, I have this [00:45:00] podcast called Love and Compassion. I'm like, right, yeah.
Rashi: But those parts of us require the most loving. There are times, and this has been the hardest for me, because my husband, like I said, is my biggest frenemy, right? And he really triggers me.
Rashi: He shows me where I'm not free yet. So he says something, and I'm not loving him in that moment, for sure, because he's pushing too many buttons, and I'm like, outta it. And the thing is, I have learned to love myself even when I'm not loving him. Now there's no resistance, you know? Now I can see the neurosis of him and me, and there's no problem. So he says something, and it's so interesting what happens; recently it started happening that when I'm like, alright, I love you, even if I'm not loving towards him in that moment, there's a shift, a very subtle shift. It's very [00:46:00] subtle. And now I'm not taking him so seriously, all of this, the whole thing. And then he sees that I'm not taking it seriously. It's very much in the heat of the moment, right? And he sees that, he sees presence, that I'm just quiet and I'm pouring love on myself right now. And somehow, because the lens through which I'm seeing myself is changing, the lens through which I'm seeing him changes at the same time. And then his lens on how he sees me and himself changes in that moment. And then he'll laugh out of nowhere, and the whole serious thing becomes a funny thing. And that's the interesting part: the highest service we can do for humanity is to love all parts of ourselves, the non-holy parts, the non-loving parts. If we can love those parts where we go, I shouldn't be like that... oh, [00:47:00] actually, you know what? What if you love the part of you that's being like that? Because who you are is inherently peaceful, inherently loving, inherently accepting. So in that moment, whatever is not accepting is the ego. So the invitation here is to love the ego, the constructed self. Only then can we be free. Only then can we be free to be who we are, because the ego dissolves in that. When the light of awareness shines on it and it's seen, the constructed self is gone in that moment, and then the constructed self comes back again. So this is a practice. And at some point we're like, you know, the Buddha used to say we are like Bodhisattvas; we're walking people home. That's why we are here in this world: we're not the Buddha yet, because then we'd be away from the Maya, the illusion, but we are part of the illusion so [00:48:00] that we can take people home together. We're walking each other home. That's what Ram Dass used to say.
Gissele: I love that. I love that. Mm-hmm. I'm doing something called Kriya Yoga. Have you heard of it?
Rashi: Kriya Yoga?
Gissele: Yeah.
Rashi: With Yogananda?
Gissele: Yes, Yogananda. Yeah, that's right.
Rashi: Right.
Gissele: I just started, yeah.
Rashi: I've heard of it, but I've never done it. So how is that going?
Gissele: Fabulous. I just started, but it's interesting. Sometimes even very short practices have a big impact. It's really interesting, 'cause you don't think you're doing anything. And to be honest, I came into it a little bit skeptical, in terms of, I'm used to meditating for two or three hours, and I think you're supposed to be doing it as an ongoing practice. Because I'm just learning it, I'm just starting with little practices. But the little practices have been really powerful.
Rashi: It's the little ones that are more powerful, you know. Loving, the act of loving oneself and seeing parts [00:49:00] of us, requires a very high level of self-awareness. It's like we're catching ourselves just before the ego has started to take control. And that practice, I feel, we can do in action, because we live such busy lives, right?
Gissele: Yeah.
Rashi: It's a luxury to even sit in meditation for that long. It's almost a privilege these days. Sometimes I wish I could go to one of those 10-day Vipassana meditation retreats and just, yeah...
Gissele: Me too. I wanna go to India.
Rashi: Oh my God, yeah. If we can do meditation in action, I feel that's more effective than going up a hill or sitting in a cave, because then we come back into the world anyway. And I remember Ram Dass again used to say, if you think you're enlightened, go and live with your family for a weekend and then come back and tell me how enlightened you are.
Gissele: I don't wanna say it was easier, but you can go to a cave somewhere, and I think that's what needed to happen with certain [00:50:00] yogis in terms of helping us lift consciousness.
Rashi: Sure. Exactly.
Gissele: So that was what happened then. But it is a lot harder, and I think I was reading this in Yogananda's book: the path of the householder is much more difficult. 'Cause you talked about the war within ourselves; there are so many families that are not talking to one another. There's so much conflict within. Of course we have wars in the world; we're in conflict with ourselves. If even with the people closest to us we can't get to that point, how do we expect there to be no wars in the world?
Rashi: Right, exactly.
Gissele: It's so hard to look at ourselves. At least it can feel that way. But being willing, for me, is the beginning point. Okay, I just have to be willing. And I've had to prioritize my time, even just to do a quick meditation. It's just as important as that email I gotta send or that lecture I gotta put together.
Rashi: A non-negotiable practice. Yes, exactly. And that's the stage, that's the season you're [00:51:00] in. I mean, I really wish I could get that time to just sit in meditation. And sometimes we just don't get it.
Gissele: Yeah. And that's okay. It's like you said, the power of practicing in the moment is...
Rashi: Very powerful.
Gissele: Equally, yeah, very powerful. Wow. So we're reaching the end. I just wanted you to share: where can people work with you? Where can people find you? Anything you wanna share with the audience?
Rashi: Sure. My website is www.rashinayarwellness.com, and there's a free app there that people can download. It helps them return to who they are. There's a series of questions they can pause and reflect on, then the answer comes, then there's guidance, and then there's a specific meditation, if people can find time to access that. And there are different options, different ways people can work with me. But I really wanna get this [00:52:00] app into as many hands as possible.
Rashi: I'm also writing my first book, which is called Living From Your Highest Frequency, which is, you know, love, right? It really talks about these lower states, everything that we talked about today, and there are tools that people can use in daily life when they don't have time to meditate, when they don't get that peaceful moment to themselves, to retreat within themselves on a moment-to-moment basis.
Gissele: Mm. I love that.
Rashi: Yeah. So go back to that peace, because we are peace, as we explored just now. It's the moment-to-moment returning to who we are that can really free us, liberate us, and help us take bigger actions in this world. Otherwise, some people can freeze and stay in anxiety for years and nothing happens. So if we can live with those lower states of consciousness but have no [00:53:00] resistance to them...
Gissele: Mm-hmm.
Rashi: ...then automatically we're in higher states of consciousness. That acceptance in itself takes us to higher places. From there, we are doing service, we are making an impact in the world, without really judging ourselves, because we are our own biggest inner critic. So yeah.
Gissele: What a perfect way to end, because I think what you said is so, so critical: the minute we stop resisting something and move to acceptance, we've automatically shifted to something higher. Thank you so much, Rashi. I had such a great time. Thank you for helping me remember who I really am, and helping our audience as well. Please work with Rashi, go check out her app, and check out her book when it's available. And thank you for joining us for another episode of The Love and Compassion Podcast with Gissele.
We welcome back Andrew Sillifant, Solution Director at Pure Storage, for a deep dive into the concept of data gravity. We start with the traditional 2010 definition coined by Dave McCrory—that data accumulates, making it harder to move, and forcing dependent systems to cluster nearby. However, Andrew presents his core thesis, arguing that this foundational principle is no longer sufficient in a world of exploding complexity. Our conversation emphasizes the need to re-examine data gravity through a modern lens, acknowledging the massive shift to cloud computing and the proliferation of interconnected systems over the last decade. Andrew introduces five crucial dimensions that now describe data's impact: Volume, redefined by context and classification; Dependency, now accelerated by API calls, integration points, and AI agents; Criticality, which includes regulations, security, and implicit SLAs; Velocity, measured by how many functions the data is used for; and Latency, complicated by geographic requirements that skew response times. These dimensions highlight how non-physical constraints, like egress fees and data sovereignty laws, create artificial friction that compounds the problem beyond sheer data size. Our discussion concludes with a new framework of five sources of data gravity that IT leaders must address: Technical Gravity (the physical component and mobility), Economic Gravity (the costs of hosting and moving data, like egress fees), Regulatory Gravity (compliance and legal restrictions), Institutional Gravity (the dependency on a small number of people who know how to manage old systems), and Measurement Gravity (budgeting and decision-making risks). Finally, Andrew connects these challenges to Pure Storage, noting how platform features like deduplication and continuous innovation are actively working to lessen the effects of data gravity for customers. To learn more, visit https://blog.purestorage.com/purely-technical/the-economics-of-data-gravity/ Check out the new Pure Storage digital customer community to join the conversation with peers and Pure experts: https://purecommunity.purestorage.com/
00:00 Intro and Welcome
01:05 Andrew's Observations About the USA
04:19 Defining Data Gravity
07:30 Challenges Caused by Data Gravity
09:01 Real-World Data Gravity Examples
17:15 Data Gravity Impact Vectors
33:02 New Dimensions of Data Gravity
40:30 Where Pure Helps with Data Gravity
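For listeners who want to play with the idea, here is a minimal sketch of the five dimensions Andrew describes, expressed as a toy comparison score. Only the dimension names come from the episode; the scales, weights, and scoring function are assumptions for illustration and are not a Pure Storage or Dave McCrory formula.

```typescript
// Illustrative only: a toy "data gravity" score built from the five dimensions
// discussed in the episode (Volume, Dependency, Criticality, Velocity, Latency).
// The 0-10 scales and equal weighting are assumptions, not a published model.

interface GravityProfile {
  volumeTB: number;            // raw size, before context/classification
  dependencyCount: number;     // API integrations, downstream systems, AI agents
  criticality: number;         // 0-10: regulation, security, implicit SLAs
  velocity: number;            // 0-10: how many functions consume the data
  latencyBoundRegions: number; // regions with strict response-time or sovereignty needs
}

// Normalize each dimension to roughly 0-10, then average. Purely a comparison aid.
function gravityScore(p: GravityProfile): number {
  const volume = Math.min(10, Math.log10(1 + p.volumeTB) * 3);
  const dependency = Math.min(10, p.dependencyCount / 5);
  const latency = Math.min(10, p.latencyBoundRegions * 2);
  const dims = [volume, dependency, p.criticality, p.velocity, latency];
  return dims.reduce((a, b) => a + b, 0) / dims.length;
}

// Hypothetical example: a large analytics lake vs. a small but heavily integrated CRM store.
const lake: GravityProfile = { volumeTB: 900, dependencyCount: 12, criticality: 6, velocity: 4, latencyBoundRegions: 1 };
const crm: GravityProfile = { volumeTB: 5, dependencyCount: 60, criticality: 9, velocity: 8, latencyBoundRegions: 3 };
console.log(gravityScore(lake).toFixed(1), gravityScore(crm).toFixed(1));
```

The point of the sketch mirrors the episode's argument: a small dataset can still be "heavy" when dependency, criticality, and latency constraints dominate, which is why sheer size is no longer a sufficient measure.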
Most small business owners treat marketing like throwing spaghetti at the wall. You try Facebook ads. You update your website. You send out newsletters. Sometimes it works. Mostly it doesn't. And you have no idea why. Joanna Wiebe has a different approach: treat marketing like an engineering system, not a creative guessing game. As the founder of Copy Hackers, Joanna has spent years helping businesses build what she calls a "Copy Selling System". A repeatable assembly line that moves prospects from complete strangers to paying customers. No more random tactics. No more copying what worked for someone else's business. Just a structured, measurable process that works for YOUR customers.
The Fatal Flaw in Most Marketing
Here's the mistake almost every small business makes: they skip straight to talking about their product features. You've got a great service. You know all the bells and whistles. So naturally, you lead with those details, right? Wrong. Your prospects aren't ready to hear about your features yet. They don't even know they have a problem you can solve. Or if they do know they have a problem, they're still exploring different types of solutions. Joanna breaks down the journey every buyer takes through five distinct stages of awareness – and your message needs to match where they are in that journey. Jump ahead too fast, and you lose them.
The Single Question That Changes Everything
Want to know the secret to writing copy that actually resonates? Stop making it up and start listening. Joanna's team uses a brilliantly simple system: a one-question survey that appears on confirmation pages right after someone takes action. "What was going on in your life that brought you to [action] today?" That's it. One question. But the responses? Pure gold. People tell you their exact pain points, in their own words, at the exact moment they're most optimistic about solving their problem. This isn't feedback from angry customers on their way out. This is insight from engaged prospects who just voted with their wallet. This voice-of-customer data becomes the foundation for every piece of marketing you create. You're not guessing what matters to your audience. They're telling you directly.
Make Your Solution Unforgettable
Here's a five-minute exercise that could transform your positioning: Name the specific problem you solve. Not a general category, the exact issue your customers face. Name the specific mechanism in your service that solves it. What's the unique approach, process, or "secret sauce" that makes your solution work? Joanna calls these your "Unique Problem Mechanism" and "Unique Solution Mechanism." When you can articulate both clearly, you create a memorable, defensible position in the market. Think about Lucky Strike's "It's toasted" or TurboTax's "Refund Calculator." These aren't just taglines – they're named mechanisms that explain exactly how the product solves a specific problem. What's yours?
Why AI Makes This Even More Critical
With ChatGPT and other AI tools flooding the market with generic copy, standing out has never been more important, or more difficult. AI can write copy. But it can't interview your customers. It can't identify the specific pain points that drive your buyers. It can't build a systematic process that fits your business model. That's where you come in. The businesses that win in this new landscape won't be the ones with the fanciest AI prompts. They'll be the ones with the strongest systems, the deepest customer insights, and the clearest positioning.
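As a concrete illustration of the confirmation-page survey described above, here is a minimal sketch. The question wording comes from the episode (with "[action]" kept as a placeholder); the element setup and the /api/voc endpoint are hypothetical, not anything Copy Hackers publishes.

```typescript
// Hypothetical sketch: one open-ended question on the confirmation page,
// storing the verbatim answer for later voice-of-customer review.
// The "/api/voc" endpoint and container id are assumptions for illustration.
const question = (action: string) =>
  `What was going on in your life that brought you to ${action} today?`;

function mountSurvey(container: HTMLElement, action: string): void {
  const label = document.createElement("label");
  label.textContent = question(action);
  const input = document.createElement("textarea");
  const button = document.createElement("button");
  button.textContent = "Send";
  button.addEventListener("click", async () => {
    // Keep the customer's own words intact; the verbatim phrasing is the asset.
    await fetch("/api/voc", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ answer: input.value, page: location.pathname, at: new Date().toISOString() }),
    });
    container.textContent = "Thank you!";
  });
  container.append(label, input, button);
}

// Usage on the "thank you" page, substituting whatever action the page confirms:
// mountSurvey(document.getElementById("voc-survey")!, "sign up");
```

The design choice the episode emphasizes is timing: asking right after conversion, in the prospect's moment of optimism, is what makes the single question yield usable language for headlines and positioning.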
Can weakness really become strength? And can ordinary, imperfect people actually become Zion?
The Gospel is life-transforming because the message of Jesus transforms our hearts with truth, saves our souls when believed, and gives us an immeasurable inheritance to walk in. But unfortunately, the Gospel can get tainted and watered down with counterfeits and partial truths. In this special episode of Foundation Stones, Lead Pastor of Refuge City Church, Jim Boyd, breaks down various forms of tainting so we can beware. Also in this message is an imperative to guard ourselves against being discipled by our own internet and social media algorithms. This is a powerful word! Support the show
Philo of Alexandria (c. 20 BCE – c. 50 CE) was a Hellenistic Jewish philosopher and mystic who lived in Alexandria, one of the great intellectual centers of the ancient world. Deeply rooted in the Hebrew Scriptures and equally fluent in Greek philosophy—especially Plato and the Stoics—Philo sought to show that true philosophy and authentic revelation were ultimately one.
Philo's distinctive contribution lies in his mystical interpretation of Scripture. Reading the Torah allegorically, he taught that beneath its literal narratives lies a spiritual map of the soul's journey toward God. Biblical figures such as Abraham, Moses, and Jacob symbolize inner states of awakening, purification, and union. For Philo, the highest purpose of human life is not ethical conformity alone, but direct experiential knowledge of God. Central to his mysticism is the idea of ecstasy (ekstasis)—a state in which the soul transcends discursive thought and is lifted beyond itself into divine illumination. In this condition, the ordinary mind falls silent and the soul becomes receptive to God's presence. Philo insists that such knowledge cannot be grasped by reason or language, but is given through divine grace when the egoic self is relinquished.
Get ready to relive peak 90s sports-movie chaos, because this bonus episode of No More Late Fees is all about flying V energy, childhood crushes, and extremely serious hockey opinions. Jackie and Danielle are back on the ice, revisiting The Mighty Ducks trilogy in a way only they can—equal parts reverent, chaotic, and lovingly unhinged.
In this episode, the hosts break down Mighty Ducks lore through a full-on character draft, pulling players from all three films and debating who truly deserves a roster spot. Along the way, they dig into behind-the-scenes trivia, character arcs that aged surprisingly well (and some that absolutely did not), and the emotional weight of growing up on Disney Channel sports movies. Expect hot takes on Julie “The Cat,” Adam Banks’ undeniable talent, Bash Brothers strategy, and why Iceland may be the most iconic villain team of the era.
The conversation also veers into broader nostalgia, touching on kids sports movies that shaped an entire generation, questionable accents, rich-kid antagonists, and the cultural pipeline from The Mighty Ducks to modern sports fandom. With witty commentary, playful arguments, and deep-cut references, this episode feels like hanging out with friends who know way too much about 90s movies—and are proud of it.
If you grew up rewinding VHS tapes, yelling “quack, quack” at the TV, or forming lifelong opinions about fictional hockey teams, this episode is for you. Be sure to subscribe to No More Late Fees, leave a review, and share the episode with a fellow Duck who still knows all the words to the theme music.
Keywords: Mighty Ducks podcast, No More Late Fees, 90s movie podcast, Disney sports movies, Mighty Ducks trivia, Mighty Ducks character analysis, nostalgic movie podcast, millennial pop culture, kids sports films
—
No More Late Fees
https://nomorelatefeespodcast.com
909-601-NMLF (6653)
—
Follow Us on Social:
Instagram: https://www.instagram.com/nomorelatefees
TikTok: https://www.tiktok.com/@nomorelatefees
Facebook: https://www.facebook.com/nomorelatefees
Youtube: https://www.youtube.com/@nomorelatefees
Twitter: https://x.com/NoMoreLateFees
—
CONQUERing
myconquering.com
10% Off Code: JACKIE10
—
NostaBeauty
https://nostabeauty.com
20% Off Code: NMLF
—
Previous Episodes
BASEketball: https://nomorelatefeespodcast.com/episode/baseketball
Sports Comedies with Coach Ron: https://nomorelatefeespodcast.com/episode/sports-comedies-with-coach-ron
Out Cold: https://nomorelatefeespodcast.com/episode/out-cold
Best Winter Y2K Movies: https://nomorelatefeespodcast.com/episode/best-winter-y2k-movies-from-slope-laughs-to-icy-disasters