From rewriting Google's search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions, from CPUs and sharded indices to multimodal models that reason across text, video, and code.

Jeff joins us to unpack what it really means to "own the Pareto frontier," why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules), not FLOPs, is becoming the true bottleneck, what it was like leading the charge to unify all of Google's AI teams, and why the next leap won't come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.

We discuss:

* Jeff's early neural net thesis in 1990: parallel training before it was cool, why he believed scaling would win decades early, and the "bigger model, more data, better results" mantra that held for 15 years
* The evolution of Google Search: sharding, moving the entire index into memory in 2001, softening query semantics pre-LLMs, and why retrieval pipelines already resemble modern LLM systems
* Pareto frontier strategy: why you need both frontier "Pro" models and low-latency "Flash" models, and how distillation lets smaller models surpass prior generations
* Distillation deep dive: ensembles → compression → logits as soft supervision, and why you need the biggest model to make the smallest one good
* Latency as a first-class objective: why 10–50x lower latency changes UX entirely, and how future reasoning workloads will demand 10,000 tokens/sec
* Energy-based thinking: picojoules per bit, why moving data costs 1000x more than a multiply, batching through the lens of energy, and speculative decoding as amortization
* TPU co-design: predicting ML workloads 2–6 years out, speculative hardware features, precision reduction, sparsity, and the constant feedback loop between model architecture and silicon
* Sparse models and "outrageously large" networks: trillions of parameters with 1–5% activation, and why sparsity was always the right abstraction
* Unified vs. specialized models: abandoning symbolic systems, why general multimodal models tend to dominate vertical silos, and when vertical fine-tuning still makes sense
* Long context and the illusion of scale: beyond needle-in-a-haystack benchmarks toward systems that narrow trillions of tokens to 117 relevant documents
* Personalized AI: attending to your emails, photos, and documents (with permission), and why retrieval + reasoning will unlock deeply personal assistants
* Coding agents: 50 AI interns, crisp specifications as a new core skill, and how ultra-low latency will reshape human–agent collaboration
* Why ideas still matter: transformers, sparsity, RL, hardware, systems — scaling wasn't blind; the pieces had to multiply together

Show Notes:

* Gemma 3 Paper
* Gemma 3
* Gemini 2.5 Report
* Jeff Dean's "Software Engineering Advice from Building Large-Scale Distributed Systems" Presentation (with Back of the Envelope Calculations)
* Latency Numbers Every Programmer Should Know by Jeff Dean
* The Jeff Dean Facts
* Jeff Dean Google Bio
* Jeff Dean on "Important AI Trends" @Stanford AI Club
* Jeff Dean & Noam Shazeer — 25 years at Google (Dwarkesh)

Jeff Dean

* LinkedIn: https://www.linkedin.com/in/jeff-dean-8b212555
* X: https://x.com/jeffdean

Google

* https://google.com
* https://deepmind.google

Full Video Episode

Timestamps

00:00:04 — Introduction: Alessio & Swyx welcome Jeff Dean, chief AI scientist at Google, to the Latent Space podcast
00:00:30 — Owning the Pareto Frontier & balancing frontier vs low-latency models
00:01:31 — Frontier models vs Flash models + role of distillation
00:03:52 — History of distillation and its original motivation
00:05:09 — Distillation's role in modern model scaling
00:07:02 — Model hierarchy (Flash, Pro, Ultra) and distillation sources
00:07:46 — Flash model economics & wide deployment
00:08:10 — Latency importance for complex tasks
00:09:19 — Saturation of some tasks and future frontier tasks
00:11:26 — On benchmarks, public vs internal
00:12:53 — Example long-context benchmarks & limitations
00:15:01 — Long-context goals: attending to trillions of tokens
00:16:26 — Realistic use cases beyond pure language
00:18:04 — Multimodal reasoning and non-text modalities
00:19:05 — Importance of vision & motion modalities
00:20:11 — Video understanding example (extracting structured info)
00:20:47 — Search ranking analogy for LLM retrieval
00:23:08 — LLM representations vs keyword search
00:24:06 — Early Google search evolution & in-memory index
00:26:47 — Design principles for scalable systems
00:28:55 — Real-time index updates & recrawl strategies
00:30:06 — Classic "Latency numbers every programmer should know"
00:32:09 — Cost of memory vs compute and energy emphasis
00:34:33 — TPUs & hardware trade-offs for serving models
00:35:57 — TPU design decisions & co-design with ML
00:38:06 — Adapting model architecture to hardware
00:39:50 — Alternatives: energy-based models, speculative decoding
00:42:21 — Open research directions: complex workflows, RL
00:44:56 — Non-verifiable RL domains & model evaluation
00:46:13 — Transition away from symbolic systems toward unified LLMs
00:47:59 — Unified models vs specialized ones
00:50:38 — Knowledge vs reasoning & retrieval + reasoning
00:52:24 — Vertical model specialization & modules
00:55:21 — Token count considerations for vertical domains
00:56:09 — Low resource languages & contextual learning
00:59:22 — Origins: Dean's early neural network work
01:10:07 — AI for coding & human–model interaction styles
01:15:52 — Importance of crisp specification for coding agents
01:19:23 — Prediction: personalized models & state retrieval
01:22:36 — Token-per-second targets (10k+) and reasoning throughput
01:23:20 — Episode conclusion and thanks

Transcript

Alessio Fanelli [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by Swyx, editor of Latent Space.

Shawn Wang [00:00:11]: Hello, hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome.

Jeff Dean: Thanks for having me.

Shawn Wang: It's a bit surreal to have you in the studio. I've watched so many of your talks, and obviously your career has been super legendary. So, I mean, congrats. I think the first thing that must be said: congrats on owning the Pareto frontier.

Jeff Dean [00:00:30]: Thank you, thank you. Pareto frontiers are good. It's good to be out there.

Shawn Wang [00:00:34]: Yeah, I mean, I think it's a combination of both. You have to own the Pareto frontier. You have to have, like, frontier capability, but also efficiency, and then offer that range of models that people like to use. And, you know, some part of this was started because of your hardware work. Some part of that is your model work, and I'm sure there's lots of secret sauce that you guys have worked on cumulatively. But, like, it's really impressive to see it all come together in this latest release.

Jeff Dean [00:01:04]: Yeah, yeah. I mean, I think, as you say, it's not just one thing. It's like a whole bunch of things up and down the stack. And, you know, all of those really combine to help make us able to make highly capable large models, as well as, you know, software techniques to get those large model capabilities into much smaller, lighter weight models that are, you know, much more cost effective and lower latency, but still, you know, quite capable for their size. Yeah.

Alessio Fanelli [00:01:31]: How much pressure do you have on, like, having the lower bound of the Pareto frontier, too? I think, like, the new labs are always trying to push the top performance frontier because they need to raise more money and all of that. And you guys have billions of users. And I think initially, when you worked on the TPU, you were thinking, you know, if everybody that used Google used the voice model for, like, three minutes a day, you'd need to double your CPU count. Like, what's that discussion today at Google? Like, how do you prioritize frontier versus, like, we have to do this? How do we actually need to deploy it if we build it?

Jeff Dean [00:02:03]: Yeah, I mean, I think we always want to have models that are at the frontier or pushing the frontier, because I think that's where you see what capabilities now exist that didn't exist at the slightly less capable last-year's version or six-months-ago version. At the same time, you know, we know those are going to be really useful for a bunch of use cases, but they're going to be a bit slower and a bit more expensive than people might like for a bunch of other, broader use cases. So I think what we want to do is always have kind of a highly capable, sort of affordable model that enables a whole bunch of, you know, lower latency use cases. People can use them for agentic coding much more readily, and then have the high-end, you know, frontier model that is really useful for, you know, deep reasoning, you know, solving really complicated math problems, those kinds of things. And it's not that one or the other is useful. They're both useful. So I think we'd like to do both.
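A concrete reading of "owning the Pareto frontier": a model earns a place on the frontier only if no other model is both cheaper and better. A minimal sketch; the model names and the cost and quality numbers are invented for illustration, not actual Gemini figures:

```python
# Hypothetical (cost per 1M tokens, quality score) points; illustrative only.
models = {
    "flash-lite":   (0.10, 62.0),
    "flash":        (0.30, 71.0),
    "pro":          (2.50, 82.0),
    "last-gen-pro": (3.00, 70.0),  # dominated: "pro" is cheaper AND better
}

def pareto_frontier(points):
    """Keep models that no other model beats on both cost and quality."""
    frontier = []
    for name, (cost, quality) in points.items():
        dominated = any(
            other_cost <= cost and other_quality >= quality
            and (other_cost, other_quality) != (cost, quality)
            for other_cost, other_quality in points.values()
        )
        if not dominated:
            frontier.append(name)
    return frontier

print(pareto_frontier(models))  # ['flash-lite', 'flash', 'pro']
```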
And also, you know, through distillation, which is a key technique for making the smaller models more capable, you know, you have to have the frontier model in order to then distill it into your smaller model. So it's not like an either-or choice. You sort of need that in order to actually get a highly capable, more modest size model. Yeah.

Alessio Fanelli [00:03:24]: I mean, you and Geoffrey Hinton came up with distillation in 2014.

Jeff Dean [00:03:28]: Don't forget Oriol Vinyals as well. Yeah, yeah.

Alessio Fanelli [00:03:30]: A long time ago. But, like, I'm curious how you think about the cycle of these ideas, even, you know, sparse models. How do you reevaluate them? How do you think about, in the next generation of model, what is worth revisiting? You worked on so many ideas that end up being influential, but in the moment they might not feel that way necessarily. Yeah.

Jeff Dean [00:03:52]: I mean, I think distillation was originally motivated because we were seeing that we had a very large image data set at the time, you know, 300 million images that we could train on. And we were seeing that if you create specialists for different subsets of those image categories (you know, this one's going to be really good at sort of mammals, and this one's going to be really good at sort of indoor room scenes or whatever), and you cluster those categories and train on an enriched stream of data after you do pre-training on a much broader set of images, you get much better performance. You can then treat that whole set of maybe 50 models you've trained as a large ensemble, but that's not a very practical thing to serve, right? So distillation really came about from the idea of, okay, what if we want to actually serve that: can we train all these independent expert models and then squish them into something that actually fits in a form factor you can actually serve? And that's, you know, not that different from what we're doing today. Often today, instead of having an ensemble of 50 models, we have a much larger scale model that we then distill into a much smaller scale model.

Shawn Wang [00:05:09]: Yeah. A part of me also wonders if distillation also has a story with the RL revolution. So let me maybe try to articulate what I mean by that. RL basically spikes models in a certain part of the distribution. You can spike models, but it might be lossy in other areas, and it's kind of an uneven technique; but you can probably distill it back. I think the general dream is to be able to advance capabilities without regressing on anything else. And I feel like some part of that whole capability-merging-without-loss should be a distillation process, but I can't quite articulate it. I haven't seen many papers about it.

Jeff Dean [00:06:01]: Yeah, I mean, I tend to think of one of the key advantages of distillation as being that you can have a much smaller model, and a very large training data set, and you can get utility out of making many passes over that data set, because you're now getting the logits from the much larger model in order to coax the right behavior out of the smaller model, behavior that you wouldn't otherwise get with just the hard labels.
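Mechanically, the logits-as-soft-supervision idea looks roughly like this. A minimal NumPy sketch of the distillation loss from the Hinton, Vinyals and Dean paper; the temperature, blend weight, and toy shapes are illustrative choices:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z -= z.max(axis=-1, keepdims=True)  # for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Cross-entropy against the teacher's temperature-softened distribution
    (the 'soft labels'), blended with ordinary hard-label cross-entropy.
    (The paper also scales the soft term by T**2; omitted for brevity.)"""
    soft_targets = softmax(teacher_logits, T)
    student_soft = softmax(student_logits, T)
    soft_loss = -(soft_targets * np.log(student_soft + 1e-9)).sum(-1).mean()
    student_hard = softmax(student_logits)
    hard_loss = -np.log(
        student_hard[np.arange(len(labels)), labels] + 1e-9
    ).mean()
    return alpha * soft_loss + (1 - alpha) * hard_loss

# Toy batch: 4 examples, 10-token vocabulary.
rng = np.random.default_rng(0)
print(distillation_loss(rng.normal(size=(4, 10)),
                        rng.normal(size=(4, 10)),
                        np.array([1, 3, 5, 7])))
```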
And so, you know, what we've observed is that you can get very close to your largest model's performance with distillation approaches. And that seems to be a nice sweet spot for a lot of people, because it has enabled us, for multiple Gemini generations now, to make the Flash version of the next generation as good as, or even substantially better than, the previous generation's Pro. And I think we're going to keep trying to do that, because that seems like a good trend to follow.

Shawn Wang [00:07:02]: So, Dara asked: the original lineup was Flash, Pro, and Ultra. Are you just sitting on Ultra and distilling from that? Is that, like, the mother lode?

Jeff Dean [00:07:12]: I mean, we have a lot of different kinds of models. Some are internal ones that are not necessarily meant to be released or served. Some are, you know, our Pro-scale model, and we can distill from that as well into our Flash-scale model. So I think, you know, it's an important set of capabilities to have. And inference-time scaling can also be a useful thing to improve the capabilities of the model.

Shawn Wang [00:07:35]: Yeah, yeah, cool. And obviously, I think the economics of Flash is what led to the total dominance. I think the latest number is like 50 trillion tokens. I don't know. I mean, obviously, it's changing every day.

Jeff Dean [00:07:46]: Yeah, yeah. But, you know, by market share, hopefully up.

Shawn Wang [00:07:50]: No, I mean, economics-wise: because Flash is so economical, you can use it for everything. Like, it's in Gmail now. It's in YouTube. It's in everything.

Jeff Dean [00:08:02]: We're using it more in our search products, in AI Mode and the AI Overviews.

Shawn Wang [00:08:05]: Oh, my God. Flash powers AI Mode. Yeah, I didn't even think about that.

Jeff Dean [00:08:10]: I mean, I think one of the things that is quite nice about the Flash model is that not only is it more affordable, it's also lower latency. And I think latency is actually a pretty important characteristic for these models, because we're going to want models to do much more complicated things that involve generating many more tokens between when you ask the model to do something and when it actually finishes what you asked it to do. Because you're going to ask now, not just "write me a for loop," but "write me a whole software package to do X or Y or Z." And so having low latency systems that can do that seems really important, and Flash is one way of doing that. You know, obviously our hardware platforms enable a bunch of interesting aspects of our serving stack as well, like TPUs. The interconnect between chips on the TPUs is actually quite high performance and quite amenable to, for example, long-context kinds of attention operations, or having sparse models with lots of experts. These kinds of things really matter a lot in terms of how you make them servable at scale.

Alessio Fanelli [00:09:19]: Yeah. Does it feel like there's some breaking point for the Pro-to-Flash distillation, kind of like one generation delayed? Like, in certain tasks, the Pro model today saturates some sort of task, so next generation, that same task will be saturated at the Flash price point.
And I think, for most of the things people use models for, at some point the Flash model in two generations will be able to do basically everything. So how do you make it economical to keep pushing the Pro frontier when a lot of the population will be okay with the Flash model? I'm curious how you think about that.

Jeff Dean [00:09:59]: I mean, I think that's true if your distribution of what people are asking the models to do is stationary, right? But I think what often happens is that as the models become more capable, people ask them to do more. I mean, I think this happens in my own usage. Like, I used to try our models a year ago for some sort of coding task, and it was okay at some simpler things, but wouldn't work very well for more complicated things. And since then, we've improved dramatically on the more complicated coding tasks, and now I'll ask it to do much more complicated things. And I think that's true not just of coding, but of, you know, now it's "can you analyze all the renewable energy deployments in the world and give me a report on solar panel deployment" or whatever. That's a much more complicated task than people would have asked a year ago. And so you are going to want more capable models to keep pushing the frontier of what people ask the models to do. And that also then gives us insight into, okay, where do things break down? How can we improve the model in those particular areas in order to make the next generation even better?

Alessio Fanelli [00:11:11]: Yeah. Are there any benchmarks or test sets you use internally? Because it's almost like the same benchmarks get reported every time, and it's like, all right, it's 99 instead of 97. How do you keep pushing the team internally, like, "this is what we're building towards"? Yeah.

Jeff Dean [00:11:26]: I mean, I think benchmarks, particularly external ones that are publicly available, have their utility, but they often have a lifespan of utility: they're introduced, and maybe they're quite hard for current models. You know, I like to think the best kinds of benchmarks are ones where the initial scores are like 10 to 20 or 30%, maybe, but not higher. And then you can work on improving that capability for whatever it is the benchmark is trying to assess, and get it up to like 80, 90%, whatever. I think once it hits 95% or so, you get very diminishing returns from really focusing on that benchmark, because either you've now achieved that capability, or there's the issue of leakage of public data, or very related kinds of data, into your training data. So we have a bunch of held-out internal benchmarks that we really look at, where we know that wasn't represented in the training data at all. There are capabilities that we want the model to have that it doesn't have now, and then we can work on assessing, you know, how do we make the model better at these kinds of things. Is it that we need a different kind of data to train on, more specialized for this particular kind of task?
Do we need, you know, a bunch of architectural improvements, or some sort of model capability improvements? What would help make that better?

Shawn Wang [00:12:53]: Is there such an example, a benchmark that inspired an architectural improvement? I'm just jumping on that because you just mentioned it.

Jeff Dean [00:13:02]: I mean, I think some of the long-context capability of the Gemini models, which came, I guess, first in 1.5, really was about looking at, okay, we want to have, um...

Shawn Wang [00:13:15]: Immediately everyone jumped to completely green charts. I was like, how did everyone crack this at the same time? Right. Yeah.

Jeff Dean [00:13:23]: I mean, as you say, that single needle-in-a-haystack benchmark is really saturated, at least for context lengths up to 128K or something, and most models don't actually go much larger than 128K these days. We're trying to push the frontier with 1 million or 2 million tokens of context, which is good, because I think there are a lot of use cases where putting a thousand pages of text, or multiple hour-long videos, into the context and then actually being able to make use of that is useful. The use cases still to explore there are fairly large. But the single-needle-in-a-haystack benchmark is sort of saturated. So you really want more complicated, sort of multi-needle, or more realistic "take all this content and produce this kind of answer from a long context" tasks that better assess what it is people really want to do with long context, which is not just, you know, "can you tell me the product number for this particular thing?"

Shawn Wang [00:14:31]: Yeah, it's retrieval. It's retrieval within machine learning. It's interesting, because I think the more meta level I'm trying to operate at here is: you have a benchmark, and you're like, okay, I see the architectural thing I need to do in order to go fix that. But should you do it? Because sometimes that's an inductive bias, basically. It's what Jason Wei, who used to work at Google, would say: exactly the kind of thing where you're going to win short term, but longer term, I don't know if that's going to scale. You might have to undo that.

Jeff Dean [00:15:01]: I mean, I like to focus not on exactly what solution we're going to derive, but on what capability you would want. And I think we're very convinced that, you know, long context is useful, but it's way too short today. Like, I think what you would really want is: can I attend to the internet while I answer my question? But that's not going to happen by purely scaling the existing solutions, which are quadratic. A million tokens kind of pushes what you can do; you're not going to do that with a billion tokens, let alone a trillion. But I think if you could give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses for that. You could attend to the internet. You could attend to the pixels of YouTube and the sort of deeper representations that we can find, not just for a single video, but across many videos. You know, on a personal Gemini level, you could attend to all of your personal state, with your permission.
So, like, your emails, your photos, your docs, your plane tickets. I think that would be really, really useful. And the question is: how do you get algorithmic improvements and system-level improvements that get you to something where you actually can attend to trillions of tokens in a meaningful way? Yeah.

Shawn Wang [00:16:26]: By the way, I did some math, and if you spoke all day, every day, for eight hours a day, you only generate a maximum of like a hundred K tokens, which very comfortably fits.

Jeff Dean [00:16:38]: Right. But if you then say, okay, I want to be able to understand everything people are putting on videos...

Shawn Wang [00:16:46]: Well, also, I think the classic example is that you start going beyond language into, like, proteins and whatever else is extremely information dense. Yeah.

Jeff Dean [00:16:55]: I mean, I think one of the things about Gemini's multimodal aspects is that we've always wanted it to be multimodal from the start. And to people that sometimes means text and images and video and audio, sort of human-like modalities. But I think it's also really useful to have Gemini know about non-human modalities: LiDAR sensor data from, say, Waymo vehicles or robots, or various kinds of health modalities, X-rays and MRIs and imaging and genomics information. And I think there are probably hundreds of modalities of data where you'd like the model to at least be exposed to the fact that this is an interesting modality that has certain meaning in the world. Even if you haven't trained on all the LiDAR data or MRI data you could have, because maybe that doesn't make sense in terms of trade-offs of what you include in your main pre-training data mix, at least including a little bit of it is actually quite useful. Yeah. Because it sort of tells the model that this is a thing.

Shawn Wang [00:18:04]: Yeah. Do you believe, since we're on this topic (and I just get to ask you all the questions I always wanted to ask, which is fantastic): are there some king modalities, modalities that supersede all the other modalities? A simple example: vision can, on a pixel level, encode text, and DeepSeek had this DeepSeek-OCR paper that did that. And vision has also been shown to maybe incorporate audio, because you can do audio spectrograms, and that's also a vision-capable thing. So maybe vision is just the king modality. Yeah.

Jeff Dean [00:18:36]: I mean, vision and motion are quite important things, right? Motion, well, like video as opposed to static images. I mean, there's a reason evolution has evolved eyes like 23 independent ways: it's such a useful capability for sensing the world around you, which is really what we want these models to do, interpret the things we're seeing or the things we're paying attention to, and then help us use that information to do things. Yeah.

Shawn Wang [00:19:05]: I think motion, you know... I still want to shout out, I think Gemini is still the only native video understanding model that's out there. So I use it for YouTube all the time. Nice.

Jeff Dean [00:19:15]: Yeah. Yeah. I mean, I think people are not necessarily aware of what the Gemini models can actually do. Like, I have an example I've used in one of my talks.
It was a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has, like, Michael Jordan hitting some jump shot at the end of the finals, and, you know, some soccer goals and things like that. And you can literally just give it the video and say: can you please make me a table of what all these different events are, the date when they happened, and a short description? And you now get an 18-row table of that information extracted from the video, which is, you know, not something most people think of as "turn video into a SQL-like table."

Alessio Fanelli [00:20:11]: Has there been any discussion inside of Google of... you mentioned attending to the whole internet, right? Google was almost built because a human cannot attend to the whole internet, and you need some sort of ranking to find what you need. That ranking is much different for an LLM, because you can expect a person to look at maybe the first five, six links in a Google search, versus for an LLM, should you expect it to have 20 links that are highly relevant? How do you internally figure out, you know, how to build the AI mode that is maybe much broader in search and span, versus the more human one? Yeah.

Jeff Dean [00:20:47]: I mean, I think even in pre-language-model-based work, you know, our ranking systems would be built to start with a giant number of web pages in our index, many of them not relevant. So you identify a subset of them that are relevant with very lightweight kinds of methods: you know, you're down to like 30,000 documents or something. And then you gradually refine that, applying more and more sophisticated algorithms and more and more sophisticated signals of various kinds, in order to get down to ultimately what you show, which is, you know, the final 10 results, or 10 results plus other kinds of information. And I think an LLM-based system is not going to be that dissimilar, right? You're going to attend to trillions of tokens, but you're going to want to identify, you know, what are the 30,000-ish documents, with maybe the 30 million interesting tokens, and then how do you go from that to the 117 documents I really should be paying attention to in order to carry out the task the user has asked? And, you know, you can imagine systems where you have a lot of highly parallel processing to identify those initial 30,000 candidates, maybe with very lightweight kinds of models. Then you have some system that helps you narrow down from 30,000 to the 117, with maybe a slightly more sophisticated model or set of models. And then maybe the final model, the thing that looks at the 117 things, is your most capable model. So I think it's going to be some system like that, that really enables you to give the illusion of attending to trillions of tokens. Sort of the way Google search gives you, well, not the illusion, you really are searching the internet, but you're finding a very small subset of things that are relevant.
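That funnel is easy to caricature in code. A sketch of the cascade as described (a huge corpus narrowed to roughly 30,000 candidates, then to 117 documents); the scoring functions here are hypothetical stand-ins, not Google's ranking stack:

```python
def retrieval_funnel(query, corpus, cheap_score, rerank_score, frontier_model):
    """Narrow a huge corpus in stages so only ~100 documents ever reach
    the expensive model: cheap filter -> mid-tier rerank -> full attention."""
    # Stage 1: lightweight scoring over everything -> ~30,000 candidates.
    candidates = sorted(corpus, key=lambda d: cheap_score(query, d),
                        reverse=True)[:30_000]
    # Stage 2: a somewhat more sophisticated model -> the 117 that matter.
    shortlist = sorted(candidates, key=lambda d: rerank_score(query, d),
                       reverse=True)[:117]
    # Stage 3: the most capable model actually reads the survivors.
    return frontier_model(query, shortlist)

# Toy usage, with word-overlap counting as a stand-in scorer.
overlap = lambda q, d: len(set(q.split()) & set(d.split()))
docs = ["solar panel deployment report", "cat video compilation",
        "grid storage economics"]
print(retrieval_funnel("solar deployment trends", docs, overlap, overlap,
                       lambda q, ds: ds))
```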
Shawn Wang [00:22:47]: Yeah. I often tell a lot of people who are not steeped in Google search history that, well, you know, BERT was basically immediately put inside of Google search, and that improved results a lot, right? I don't have any numbers off the top of my head, but I'm sure you guys do; those are obviously the most important numbers to Google. Yeah.

Jeff Dean [00:23:08]: I mean, I think going to an LLM-based representation of text and words and so on enables you to get out of the explicit, hard notion of particular words having to be on the page, and really get at the notion that the topic of this page, or this paragraph, is highly relevant to this query. Yeah.

Shawn Wang [00:23:28]: I don't think people understand how much LLMs have taken over all these very high traffic systems. Like, it's Google, it's YouTube. YouTube has this semantic ID thing where every item in the vocab is a YouTube video or something that predicts the video using a codebook, which is absurd to me at YouTube's size. And then most recently Grok as well, for xAI.

Jeff Dean [00:23:50]: I mean, I'll call out that even before LLMs were used extensively in search, we put a lot of emphasis on softening the notion of what the user actually entered into the query.

Shawn Wang [00:24:06]: So do you have, like, a history of what the progression was? Oh yeah.

Jeff Dean [00:24:09]: I mean, I actually gave a talk at, I guess, the Web Search and Data Mining conference in 2009. We never actually published any papers about the origins of Google search, sort of, but we went through four or five or six generations of redesigning the search and retrieval system from about 1999 through 2004 or 2005, and that talk is really about that evolution. And one of the things that really happened in 2001 was that we were working to scale the system in multiple dimensions. One is we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality in general, because if you don't have the page in your index, you're not going to do well. And then we also needed to scale our capacity, because our traffic was growing quite extensively. And so we had, you know, a sharded system where you have more and more shards as the index grows: you have like 30 shards, and then if you want to double the index size, you make 60 shards, so that you can bound the latency with which you respond to any particular user query. And then as traffic grows, you add more and more replicas of each of those shards. And we eventually did the math and realized that in a data center where we had, say, 60 shards and 20 copies of each shard, we now had 1,200 machines with disks, and one copy of that index would actually fit in memory across those 1,200 machines. So in 2001, we put our entire index in memory, and what that enabled from a quality perspective was amazing. Before, you had to be really careful about how many different terms you looked at for a query, because every one of them would involve a disk seek on every one of the 60 shards, and as you make your index bigger, that becomes even more inefficient. But once you have the whole index in memory, it's totally fine to have 50 terms you throw into the query from the user's original three- or four-word query, because now you can add synonyms, like restaurant and restaurants and cafe and bistro and all these things.
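The arithmetic behind that 2001 switch is worth making explicit. A back-of-the-envelope sketch in the spirit of Dean's own latency-numbers exercises, using the shard and replica counts from the anecdote; the per-machine RAM figure is a guess for the era:

```python
# Back-of-the-envelope for the 2001 in-memory index decision.
shards = 60        # index split 60 ways to bound per-query latency
replicas = 20      # copies of each shard to absorb growing traffic
machines = shards * replicas
print(machines)    # 1200 disk-backed machines already in the data center

# Key realization: one full copy of the index fits in the combined RAM of
# those same machines (assuming, say, ~2 GB of RAM each, circa 2001),
# turning every per-term disk seek into a memory lookup.
aggregate_ram_gb = machines * 2
print(f"~{aggregate_ram_gb} GB of aggregate RAM for one in-memory index copy")
```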
And you can suddenly start really getting at the meaning of the words, as opposed to the exact semantic form the user typed in. And that was, you know, 2001, very much pre-LLM, but it was really about softening the strict definition of what the user typed in order to get at the meaning.

Alessio Fanelli [00:26:47]: What are principles that you use to design these systems, especially when, I mean, in 2001 the internet is doubling, tripling every year in size? And I think today you kind of see that with LLMs too, where every year the jumps in size and capabilities are just so big. Are there any principles that you use to think about this? Yeah.

Jeff Dean [00:27:08]: I mean, I think, first, whenever you're designing a system, you want to understand what design parameters are going to be most important in designing it. So: how many queries per second do you need to handle? How big is the internet? How big is the index you need to handle? How much data do you need to keep for every document in the index? How are you going to look at it when you retrieve things? What happens if traffic were to double or triple: will that system work well? And I think a good design principle is to design a system so that the most important characteristics can scale by factors of five or ten, but probably not beyond that. Because often what happens is, if you design a system for X and something suddenly becomes a hundred X, that enables a very different point in the design space, one that would not make sense at X but all of a sudden at a hundred X makes total sense. Like going from a disk-based index to an in-memory index makes a lot of sense once you have enough traffic, because now you have enough replicas of the state on disk that those machines actually can hold a full copy of the index in memory. And that all of a sudden enabled a completely different design that wouldn't have been practical before. So I'm a big fan of thinking through designs in your head, just playing with the design space a little, before you actually do a lot of writing of code. But, you know, as you said, in the early days of Google we were growing the index quite extensively. We were growing the update rate of the index. The update rate actually is the parameter that changed the most, surprisingly. It used to be once a month.

Shawn Wang [00:28:55]: Yeah.

Jeff Dean [00:28:56]: And then we went to a system that could update any particular page in, like, sub one minute. Okay.

Shawn Wang [00:29:02]: Yeah. Because this is a competitive advantage, right?

Jeff Dean [00:29:04]: Because all of a sudden, news-related queries... you know, if you've got last month's news index, it's not actually that useful.

Shawn Wang [00:29:11]: News is a special beast. Was there any... like, you could have split it onto a separate system.

Jeff Dean [00:29:15]: Well, we did. We launched a Google News product, but you also want news-related queries that people type into the main index to also be updated.

Shawn Wang [00:29:23]: So, yeah, it's interesting. And then you have to decide which pages should be updated and at what frequency.
Jeff Dean [00:29:30]: Oh yeah. There's a whole system behind the scenes that's trying to decide update rates and importance of the pages. So even if the update rate seems low, you might still want to recrawl important pages quite often, because the likelihood they change might be low, but the value of having them updated is high.

Shawn Wang [00:29:50]: Yeah, yeah. You know, this mention of latency and saving things to disk reminds me of one of your classics, which I have to bring up: latency numbers every programmer should know. Was there a general story behind that? Did you just write it down?

Jeff Dean [00:30:06]: I mean, this has, like, eight or ten different kinds of metrics: how long does a cache miss take? How long does a branch mispredict take? How long does a reference to main memory take? How long does it take to send, you know, a packet from the US to the Netherlands?

Shawn Wang [00:30:21]: Why the Netherlands, by the way? Is that because of Chrome?

Jeff Dean [00:30:25]: We had a data center in the Netherlands. So, I mean, I think this gets to the point of being able to do back-of-the-envelope calculations. These are the raw ingredients of those, and you can use them to say, okay, well, if I need to design a system to do image search and thumbnailing of the result page, how would I do that? I could pre-compute the image thumbnails, or I could try to thumbnail them on the fly from the larger images. What would that do? How much disk bandwidth do I need? How many disk seeks would I do? And you can actually do thought experiments in, you know, 30 seconds or a minute, with those basic numbers at your fingertips. And then as you build software using higher level libraries, you kind of want to develop the same intuitions for how long it takes to, you know, look up something in this particular kind of...

Shawn Wang [00:31:51]: ...which is a simple byte conversion. That's nothing interesting. I wonder, if you were to update your...

Jeff Dean [00:31:58]: I mean, I think it's really good to think about the calculations you're doing in a model, either for training or inference.

Jeff Dean [00:32:09]: Often a good way to view that is: how much state will you need to bring in from memory, either on-chip SRAM, or HBM (the accelerator-attached memory), or DRAM, or over the network? And then, how expensive is that data motion relative to the cost of, say, an actual multiply in the matrix multiply unit? And that cost is actually really, really low, right? Because, depending on your precision, I think it's sub one picojoule.

Shawn Wang [00:32:50]: Oh, okay. You measure it by energy. Yeah.

Jeff Dean [00:32:52]: Yeah. I mean, it's all going to be about energy, and how you make the most energy efficient system. And then moving data from the SRAM on the other side of the chip, not even off the chip, but on the other side of the same chip, can be, you know, a thousand picojoules. And so all of a sudden, this is why your accelerators require batching. Because if you move, say, a parameter of a model from SRAM on the chip into the multiplier unit, that's going to cost you a thousand picojoules. So you'd better make use of that thing you moved, many, many times.
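Dean's two orders of magnitude, about 1 picojoule for a low-precision multiply and about 1,000 picojoules to move the operand across the chip, turn batching into a one-line energy calculation. A sketch using the rough numbers quoted here (not datasheet values), anticipating the batch-dimension point he makes next:

```python
MOVE_PJ = 1000.0  # move one weight from far SRAM into the multiplier unit
MULT_PJ = 1.0     # one low-precision multiply, order of magnitude

def energy_per_useful_multiply(batch_size):
    """A weight moved once is reused for every example in the batch,
    so the movement cost is amortized across the batch dimension."""
    return (MOVE_PJ + batch_size * MULT_PJ) / batch_size

for b in (1, 8, 256):
    print(f"batch={b:4d}: {energy_per_useful_multiply(b):7.1f} pJ/multiply")
# batch=   1:  1001.0 pJ/multiply  (dominated by data motion)
# batch=   8:   126.0 pJ/multiply
# batch= 256:     4.9 pJ/multiply  (motion almost fully amortized)
```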
So that's where the batch dimension comes in. Because all of a sudden, you know, if you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's really not good.

Shawn Wang [00:33:40]: Yeah. Right.

Jeff Dean [00:33:41]: Because then you paid a thousand picojoules in order to do your one-picojoule multiply.

Shawn Wang [00:33:46]: I have never heard an energy-based analysis of batching.

Jeff Dean [00:33:50]: Yeah. I mean, that's why people batch. Ideally, you'd like to use batch size one, because the latency would be great.

Shawn Wang [00:33:56]: The best latency.

Jeff Dean [00:33:56]: But the energy cost and the compute cost inefficiency you get is quite large. So, yeah.

Shawn Wang [00:34:04]: Is there a similar trick, like the one you did with putting everything in memory? You know, obviously Groq has caused a lot of waves with betting very hard on SRAM. I wonder if that's something you already saw with the TPUs: to serve at your scale, you probably saw that coming. What hardware innovations or insights were formed because of what you were seeing there?

Jeff Dean [00:34:33]: Yeah. I mean, I think, you know, TPUs have this nice, regular structure of 2D or 3D meshes with a bunch of chips connected, and each one of those has HBM attached. I think for serving some kinds of models, you pay a lot higher cost and time latency bringing things in from HBM than you do bringing them in from SRAM on the chip. So if you have a small enough model, you can actually do model parallelism, spread it out over lots of chips, and you get quite good throughput improvements and latency improvements from doing that. So you're now striping your smallish-scale model over, say, 16 or 64 chips, and if you do that and it all fits in SRAM, that can be a big win. So, yeah, that's not a surprise, but it is a good technique.

Alessio Fanelli [00:35:27]: Yeah. What about the TPU design? Like, how much do you decide where the improvements have to go? This is a good example: is there a way to bring the thousand picojoules down to 50? Is it worth designing a new chip to do that? The extreme is when people say, oh, you should burn the model onto an ASIC, which is kind of the most extreme version. How much of it is worth doing in hardware when things change so quickly? What's the internal discussion? Yeah.

Jeff Dean [00:35:57]: I mean, we have a lot of interaction between, say, the TPU chip design architecture team and the higher-level modeling experts, because you really want to take advantage of being able to co-design what future TPUs should look like based on where we think the ML research puck is going, in some sense. Because as a hardware designer for ML in particular, you're trying to design a chip starting today, and that design might take two years before it even lands in a data center. And then it has to have a reasonable lifetime, with the chip carrying you another three, four, five years. So you're trying to predict, two to six years out, what ML computations people will want to run, in a very fast-changing field.
And so having people with interesting ML research ideas, things we think will start to work in that timeframe or will be more important in that timeframe, really enables us to get interesting hardware features put into TPU N+2, where TPU N is what we have today.

Shawn Wang [00:37:10]: Oh, the cycle time is plus two.

Jeff Dean [00:37:12]: Roughly. I mean, sometimes you can squeeze some changes into N+1, but bigger changes are going to require the chip design to be earlier in its lifetime design process. So whenever we can do that, it's generally good. And sometimes you can put in speculative features that maybe won't cost you much chip area, but if they work out, they'd make something ten times as fast; and if they don't work out, well, you burned a tiny amount of your chip area on that thing, but it's not that big a deal. Sometimes it's a very big change, and we want to be pretty sure it's going to work out, so we'll do lots of careful ML experimentation to show us this is actually the way we want to go. Yeah.

Alessio Fanelli [00:37:58]: Is there a reverse? Like, we already committed to this chip design, so we cannot take the model architecture that way, because it doesn't quite fit?

Jeff Dean [00:38:06]: Yeah. I mean, you definitely have things where you're going to adapt what the model architecture looks like so that it's efficient on the chips you're going to have for both training and inference of that generation of model. So it kind of goes both ways. Sometimes you can take advantage of lower-precision things that are coming in a future generation, so you might train at that lower precision even if the current generation doesn't quite do that.

Shawn Wang [00:38:40]: Yeah. How low can we go in precision? Because people are saying, like, ternary.

Jeff Dean [00:38:43]: Yeah, I mean, I'm a big fan of very low precision, because I think that saves you a tremendous amount. It's picojoules per bit that you're transferring, and reducing the number of bits is a really good way to reduce that. You know, I think people have gotten a lot of mileage out of having very low bit precision things, but then having scaling factors that apply to a whole bunch of those weights.

Shawn Wang [00:39:15]: Interesting. So low precision, but with scaling factors over groups of weights. Huh. Never considered that. Interesting. While we're on this topic: the concept of precision at all is weird when we're sampling, you know. At the end of this, we're going to have all these chips that do very good math, and then we're just going to throw a random number generator at the output. So, I mean, there's a movement towards energy-based models and processors. I'm just curious, and obviously you've thought about it: what's your commentary?
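The "very low bit precision plus scaling factors over groups of weights" idea from the exchange above is essentially block-wise quantization. A minimal sketch with an int4-style range and one float scale per 32-weight block; the block size and format are illustrative:

```python
import numpy as np

def quantize_blockwise(weights, block=32):
    """Store each block as small integers plus one float scale per block:
    roughly 4 bits per weight instead of 16 or 32."""
    w = weights.reshape(-1, block)
    # One scale per block; int4-ish symmetric range is -7..7.
    scale = np.maximum(np.abs(w).max(axis=1, keepdims=True) / 7.0, 1e-12)
    q = np.clip(np.round(w / scale), -7, 7).astype(np.int8)
    return q, scale

def dequantize_blockwise(q, scale, shape):
    return (q * scale).reshape(shape)

w = np.random.default_rng(0).normal(size=(4, 64)).astype(np.float32)
q, s = quantize_blockwise(w)
w_hat = dequantize_blockwise(q, s, w.shape)
print(np.abs(w - w_hat).max())  # small per-block reconstruction error
```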
Jeff Dean [00:39:50]: Yeah. I mean, I think there are a bunch of interesting trends. Energy-based models is one. Diffusion-based models, which don't sequentially decode tokens, are another. And, you know, speculative decoding is a way that you can get sort of an equivalent, very small...

Shawn Wang [00:40:06]: Draft.

Jeff Dean [00:40:07]: ...batch factor. You predict eight tokens out, and that enables you to increase the effective batch size of what you're doing by a factor of eight, and then you maybe accept five or six of those tokens. So you get a 5x improvement in the amortization of moving weights into the multipliers to do the prediction for the tokens. These are all really good techniques, and I think it's really good to look at them from the lens of energy (real energy, not energy-based models), and also latency and throughput, right? If you look at things from that lens, it guides you to solutions that are going to be better from the standpoint of being able to serve larger models, or equivalent-size models more cheaply and with lower latency.

Shawn Wang [00:41:03]: Yeah. Well, I think it's appealing intellectually; I haven't seen it really hit the mainstream. But I do think there's some poetry in the sense that we don't have to do a lot of shenanigans if we fundamentally design it into the hardware. Yeah, yeah.

Jeff Dean [00:41:23]: I mean, there are also the more exotic things, like analog computing substrates as opposed to digital ones. I think those are super interesting, because they can potentially be low power. But you often end up wanting to interface them with digital systems, and you end up losing a lot of the power advantages in the digital-to-analog and analog-to-digital conversions you do at the boundaries and periphery of that system. I still think there's a tremendous distance we can go from where we are today in terms of energy efficiency, with much better and specialized hardware for the models we care about.

Shawn Wang [00:42:05]: Yeah.

Alessio Fanelli [00:42:06]: Any other interesting research ideas that you've seen, or maybe things that you cannot pursue at Google that you would be interested in seeing researchers take a stab at? I guess you have a lot of researchers.

Jeff Dean [00:42:21]: Our research portfolio is pretty broad, I would say. I mean, in terms of research directions, there's a whole bunch of open problems in how you make these models reliable and able to do much longer, more complex tasks that have lots of subtasks. How do you orchestrate, you know, maybe one model that's using other models as tools, in order to build things that can collectively accomplish much more significant pieces of work than you would ask a single model to do? That's super interesting. How do you get RL to work for non-verifiable domains? I think that's a pretty interesting open problem, because it would broaden out the capabilities of the models. If we could apply the improvements you're seeing in both math and coding to other, less verifiable domains, because we've come up with RL techniques that enable us to do that effectively, that would really make the models improve quite a lot, I think.
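A toy model of the speculative-decoding amortization Dean described above (draft eight tokens ahead, accept five or six); the acceptance behavior is deliberately idealized:

```python
def tokens_per_weight_load(draft_len=8, acceptance_rate=0.65):
    """Expected tokens emitted per full-model forward pass. The big model
    verifies draft_len drafted tokens in one pass (one load of its weights)
    and on average accepts draft_len * acceptance_rate of them, plus one
    token of its own, instead of paying a full weight load per token."""
    return draft_len * acceptance_rate + 1

print(tokens_per_weight_load())         # ~6.2 tokens per weight load
print(tokens_per_weight_load(8, 0.5))   # ~5.0, the "5x amortization" quoted
```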
Alessio Fanelli [00:43:26]: I'm curious: when we had Noam Brown on the podcast, he said they already proved you can do it with Deep Research. And you kind of have it with AI Mode; in a way, it's not verifiable. I'm curious if there's any thread you think is interesting there. Both are, like, information retrieval jobs, so I wonder if the retrieval is the verifiable part that you can score, or... yeah, how would you model that problem?

Jeff Dean [00:43:55]: Yeah. I mean, I think there are ways of having other models that can evaluate the results of what a first model did, maybe even for retrieval: can you have another model that says, are these things you retrieved relevant? Or can you rate these 2,000 things you retrieved, to assess which ones are the 50 most relevant, or something? I think those kinds of techniques are actually quite effective. Sometimes it can even be the same model, just prompted differently to be a critic, as opposed to an actual retrieval system. Yeah.

Shawn Wang [00:44:28]: I do think there is that weird cliff where it feels like we've done the easy stuff, and now the next part is super hard. But it always feels like that every year: it's like, oh, we know how to do this, and the next part is super hard and nobody's figured it out. Exactly like with this RLVR thing, where everyone's talking about, well, okay, how do we do the next stage, the non-verifiable stuff? And everyone's like, I don't know... LLM judge?

Jeff Dean [00:44:56]: I mean, I feel like the nice thing about this field is that there are lots and lots of smart people thinking about creative solutions to some of the problems we all see. Because I think everyone sees that the models are great at some things, and they fall down around the edges of those things, and are not as capable as we'd like in those areas. And coming up with good techniques, trying those, and seeing which ones actually make a difference is what the whole research aspect of this field is pushing forward. And I think that's why it's super interesting. You know, if you think about two years ago, we were struggling with GSM8K problems, right? Like, "Fred has two rabbits. He gets three more rabbits. How many rabbits does he have?" That's a pretty far cry from the kinds of mathematics the models can do now: you're doing IMO and Erdős problems in pure language. That is a really, really amazing jump in capabilities in, you know, a year and a half or something. And for other areas, it'd be great if we could make that kind of leap. And, you know, we don't exactly see how to do it for some areas, but we do see it for some others, and we're going to work hard on making that better. Yeah.

Shawn Wang [00:46:13]: Yeah.

Alessio Fanelli [00:46:14]: Like YouTube thumbnail generation. That would be very helpful. We need that.

Shawn Wang [00:46:20]: That would be AGI, as far as content creators go.

Jeff Dean [00:46:22]: I guess I'm not a YouTube creator, so I don't care that much about that problem, but I guess many people do.

Shawn Wang [00:46:27]: It does matter. People do judge books by their covers, as it turns out.
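Dean's "same model, prompted differently, as a critic" suggestion from a moment ago can be sketched as a reward signal for non-verifiable tasks; `llm` here stands for a hypothetical text-in, text-out callable, not a specific API:

```python
CRITIC_PROMPT = """You are a strict grader.
Query: {query}
Candidate answer: {answer}
Rate relevance and accuracy from 0 to 10. Reply with only the number."""

def critic_reward(llm, query, answer):
    """Score an output that has no programmatic verifier by prompting the
    model itself as a judge; the score can serve as an RL reward signal."""
    reply = llm(CRITIC_PROMPT.format(query=query, answer=answer))
    try:
        return max(0.0, min(10.0, float(reply.strip()))) / 10.0
    except ValueError:
        return 0.0  # unparseable judgment earns no reward

def keep_most_relevant(llm, query, docs, keep=50):
    """Dean's example: rate ~2,000 retrieved items, keep the 50 best."""
    return sorted(docs, key=lambda d: critic_reward(llm, query, d),
                  reverse=True)[:keep]
```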
Shawn Wang: Just to dwell a bit on the IMO gold: I'm still not over the fact that a year ago we had AlphaProof and AlphaGeometry and all those things, and then this year we were like, screw that, we'll just chuck it into Gemini. What's your reflection? I think this question about the merger of symbolic systems and LLMs was very much a core belief, and then somewhere along the line people just said, nope, we'll do it all in the LLM.

Jeff Dean [00:47:02]: Yeah. I mean, I think it makes a lot of sense to me, because, you know, humans manipulate symbols, but we probably don't have a symbolic representation in our heads, right? We have some distributed representation that is neural-net-like in some way: lots of different neurons and activation patterns firing when we see certain things. And that enables us to reason and plan and, you know, do chains of thought, and roll them back: "now that approach for solving the problem doesn't seem like it's going to work, I'm going to try this one." And in a lot of ways we're emulating what we intuitively think is happening inside real brains with neural-net-based models. So it never made sense to me to have completely separate, discrete, symbolic things, and then a completely different way of thinking about those things.

Shawn Wang [00:47:59]: Interesting. I mean, maybe it seems obvious to you, but it wasn't obvious to me a year ago. Yeah.

Jeff Dean [00:48:06]: I mean, I do think that IMO progression is instructive: translating to Lean and using Lean, along with a specialized geometry model, and then this year switching to a single unified model that is roughly the production model with a little bit more inference budget. That's actually quite good, because it shows you that the capabilities of the general model have improved dramatically, and now you don't need the specialized model. This is actually very similar to the 2013-to-2016 era of machine learning, right? It used to be that people would train separate models for each different problem. I want to recognize street signs, so I train a street sign recognition model. Or I want to do speech recognition, so I have a speech model. I think now the era of unified models that do everything is really upon us, and the question is how well those models generalize to new things they've never been asked to do. And they're getting better and better.

Shawn Wang [00:49:10]: And you don't need domain experts. So I interviewed ETA, who was on that team, and he was like, yeah, I don't know where the IMO competition was held. I don't know the rules of it. I just trained the models. And it's kind of interesting that people with this universal skill set of machine learning, you just give them data and enough compute, and they can kind of tackle any task. Which is the bitter lesson, I guess. I don't know. Yeah.

Jeff Dean [00:49:39]: I mean, I think general models will win out over specialized ones in most cases.

Shawn Wang [00:49:45]: So I want to push there a bit. I think there's one hole here, which is:
There's this concept of the capacity of a model: abstractly, a model can only contain the number of bits that it has. And, you know, God knows, Gemini Pro is like one to ten trillion parameters; we don't know. But the Gemma models, for example: a lot of people want open, local models like that, and they carry some knowledge that is not necessary, right? They can't know everything. You have the luxury of the big model, and the big model should be capable of everything. But when you're distilling down to the small models, you're actually memorizing things that are not useful. So do we want to extract that? Can we divorce knowledge from reasoning?

Jeff Dean [00:50:38]: Yeah. I mean, I think you do want the model to be most effective at reasoning if it can retrieve things, right? Because having the model devote precious parameter space to remembering obscure facts that could be looked up is actually not the best use of that parameter space. You might prefer something that is more generally useful in more settings than this obscure fact. So that's always a tension. At the same time, you also don't want your model to be completely detached from knowing stuff about the world. It's probably useful to know how long the Golden Gate Bridge is, just as a general sense of how long bridges are. It maybe doesn't need to know how long some teeny little bridge in a more obscure part of the world is, but it does help to have a fair bit of world knowledge, and the bigger your model is, the more you can have. But I do think combining retrieval with reasoning, and making the model really good at doing multiple stages of retrieval...

Shawn Wang [00:51:49]: And reasoning through the intermediate retrieval results is going to be a pretty effective way of making the model seem much more capable. Because if you think about, say, a personal Gemini...

Jeff Dean [00:52:01]: Right. We're probably not going to train Gemini on my email. We'd rather have a single model that can use retrieval from my email as a tool, have the model reason about it, retrieve from my photos or whatever, then make use of that across multiple stages of interaction. That makes sense.

Alessio Fanelli [00:52:24]: Do you think the vertical models are an interesting pursuit? When people say, we're building the best healthcare LLM, we're building the best law LLM, are those kind of short-term stopgaps, or...?

Jeff Dean [00:52:37]: No, I think vertical models are interesting. You want them to start from a pretty good base model, but then you can view them as enriching the data distribution for that particular vertical domain: for healthcare, say, or for robotics. We're probably not going to train Gemini on all the robotics data we could train it on, because we want it to have a balanced set of capabilities.
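The retrieve-then-reason loop in the personal-Gemini exchange above might look like the following toy agent: the model either requests a tool call or commits to an answer, and intermediate retrieval results are fed back into the context for another round. Everything here (call_model, the SEARCH/ANSWER protocol, the stub tools) is an invented illustration of the general pattern, not a real Gemini API. Jeff picks the thread back up on data mixtures right after.

```python
# Hypothetical sketch of a multi-stage retrieve-then-reason loop over tools.
def call_model(context: str) -> str:
    # Placeholder: a real model would return either a tool call of the form
    # "SEARCH:<tool>:<query>" or a final "ANSWER:<text>".
    return "ANSWER: done"

def search_email(query: str) -> list[str]:
    return [f"(email snippet matching {query!r})"]  # Stub retrieval tool.

def search_photos(query: str) -> list[str]:
    return [f"(photo caption matching {query!r})"]  # Stub retrieval tool.

TOOLS = {"email": search_email, "photos": search_photos}

def answer(question: str, max_rounds: int = 4) -> str:
    context = f"Question: {question}"
    for _ in range(max_rounds):
        reply = call_model(context)
        if reply.startswith("ANSWER:"):
            return reply.removeprefix("ANSWER:").strip()
        _, tool, query = reply.split(":", 2)  # e.g. "SEARCH:email:flight to SFO"
        results = TOOLS[tool.strip()](query.strip())
        # Intermediate results go back into the context for the next round.
        context += f"\nResults from {tool.strip()}: {results}"
    return "(no answer within the round budget)"
```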
So we'll expose it to some robotics data, but if you're trying to build a really, really good robotics model, you're going to want to start with that base and then train it on more robotics data. Maybe that would hurt its multilingual translation capability but improve its robotics capabilities. We're always making these kinds of trade-offs in the data mix we train the base Gemini models on. We'd love to include data from 200 more languages, and as much data as we have for those languages, but that's going to displace some other capabilities of the model. It won't be as good at, you know, Perl programming. It'll still be good at Python programming, because we'll include enough of that, but other long-tail programming languages or coding capabilities may suffer, or multimodal reasoning capabilities may suffer because we didn't get to expose the model to as much data there, while it's really good at multilingual things. So I think some combination of specialized models, maybe more modular models, is appealing. It would be nice to have the capability of those 200 languages, plus this awesome robotics module, plus this awesome healthcare module, all of which can be knitted together to work in concert and called upon in different circumstances. If I have a health-related question, it should enable using the health module in conjunction with the main base model to be even better at those kinds of things.

Shawn Wang [00:54:36]: Installable knowledge.

Jeff Dean [00:54:37]: Right.

Shawn Wang [00:54:38]: Just download it as a package.

Jeff Dean [00:54:39]: And some of that installable stuff can come from retrieval, but some of it probably should come from preloaded training on, you know, a hundred billion tokens or a trillion tokens of health data.

Shawn Wang [00:54:51]: And for listeners, I'll highlight the Gemma 3n paper, where there was a little bit of that, I think.

Alessio Fanelli [00:54:56]: Yeah. I guess the question is, how many billions of tokens do you need to outpace the frontier model improvements? If I have to make this model better at healthcare while the main Gemini model is still improving, do I need 50 billion tokens? Can I do it with a hundred? And if I need a trillion healthcare tokens, they're probably not out there, you know. I think that's really the question.

Jeff Dean [00:55:21]: Well, I mean, I think healthcare is a particularly challenging domain. There's a lot of healthcare data that, appropriately, we don't have access to, but there are a lot of healthcare organizations that want to train models on their own data, which is not public healthcare data. So I think there are opportunities there to, say, partner with a large healthcare organization and train models for their use that are more bespoke, but probably better than a general model trained on, say, public data.

Shawn Wang [00:55:58]: Yeah. And by the way, this is somewhat related to the language conversation: I think one of your favorite examples was that you can put a low-resource language in the context and it just learns.
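Jeff's data-mixture point is worth making concrete: with a fixed training-token budget, raising one domain's sampling weight necessarily shrinks every other domain's share. A toy sampler, with invented weights:

```python
import random

# Toy mixture sampler. The weights are invented for illustration; the point
# is only that raising one domain's share necessarily shrinks the others
# within a fixed training-token budget.
MIXTURE = {"web_text": 0.55, "code": 0.25, "multilingual": 0.15, "robotics": 0.05}

def sample_domain(rng: random.Random) -> str:
    domains, weights = zip(*MIXTURE.items())
    return rng.choices(domains, weights=weights, k=1)[0]

rng = random.Random(0)
counts = {domain: 0 for domain in MIXTURE}
for _ in range(10_000):
    counts[sample_domain(rng)] += 1
print(counts)  # Counts land roughly in proportion to the mixture weights.
```

Bump "robotics" to 0.20 and rerun: the extra share has to come out of the other three domains, which is exactly the Perl-versus-200-languages trade-off described above.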
Jeff Dean [00:56:09]: Oh yeah. I think the example we used was Kalamang, which is truly low-resource because it's only spoken by, I think, 120 people in the world, and there's no written text.

Shawn Wang [00:56:20]: So you can just do it that way: put it in the context. Put your whole data set in the context, right?

Jeff Dean [00:56:27]: If you take a language like Somali, or Ethiopian Amharic or something, there is a fair bit of Somali text in the world. We're probably not putting all the data from those languages into the Gemini base training; we put some of it. But if you put more of it in, you'll improve the capabilities of those models.

Shawn Wang [00:56:49]: Yeah.
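What the Kalamang example amounts to is in-context learning at long context: instead of fine-tuning, the entire available corpus for the language rides along in the prompt. A toy sketch, with obviously fake placeholder data and a hypothetical call_model:

```python
# Hypothetical sketch: in-context learning of a low-resource language.
# The "data" below is an obviously fake stand-in, not real Kalamang material,
# and call_model is a placeholder for a long-context LLM API.
GRAMMAR_NOTES = "(grammar sketch: word order, morphology, example sentences)"
WORD_LIST = "(bilingual word list: word1 = mother, word2 = to see, ...)"

def call_model(prompt: str) -> str:
    return "(translation)"  # Placeholder: wire to your LLM of choice.

def translate(sentence: str) -> str:
    # No fine-tuning step: the whole corpus is simply placed in the prompt.
    prompt = (
        f"Grammar notes:\n{GRAMMAR_NOTES}\n\nWord list:\n{WORD_LIST}\n\n"
        f"Using only the material above, translate into English: {sentence}"
    )
    return call_model(prompt)

print(translate("word1 word2"))
```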
What if lasting energy and better health didn't require complicated routines or constant stress? In this episode, Dr. Debbie Ozment, DDS, shares her refreshingly simple approach to enhancing vitality, preventing disease, and creating sustainable wellness habits that truly work. As the host of the Vitality Made Simple podcast, Dr. Ozment focuses on early detection, prevention, and practical strategies that help people feel their best at every stage of life. With decades of experience in dentistry and integrative health, she highlights how oral health, inflammation, toxins, and emotional stress can quietly drain energy and impact long-term wellbeing — and what you can do about it. In this conversation, we explore: · How small, consistent lifestyle changes can extend your vitality span · The connection between oral health, inflammation, and chronic disease prevention · Simple, stress-free ways to support mental, emotional, and physical wellness Dr. Ozment has been in private dental practice since 1985 and is a graduate of the University of Oklahoma College of Dentistry. She later earned a Master's degree in Metabolic and Nutritional Medicine from the University of South Florida Morsani College of Medicine and is a Diplomate of the American Academy of Anti-Aging Medicine. Trained at the Mayo Clinic and certified as a National Board-Certified Health and Wellness Coach, she brings a truly integrative perspective to modern health. Follow Dr. Ozment on Instagram @drdebbieozment to stay up to date with her latest insights and resources. Episode also available on Apple Podcasts: https://apple.co/38oMlMr Keep up with Debbie Ozment socials here: Facebook: https://www.facebook.com/drdebbieozment/ Youtube: https://www.youtube.com/@drdebbieozment
Take2, an AI agents platform purpose-built for healthcare recruiting, today announced it has raised a $14 million Series A… The company's first agent, the AI Interviewer, conducts phone interviews with candidates 24/7, evaluates them, records calls, and syncs results directly into applicant tracking systems — all with no human in the loop. Trained on healthcare-specific hiring data, the platform helps organizations assess candidates more accurately while generating predictive insights that improve hiring quality and long-term retention. “Healthcare systems are under enormous pressure, and hiring is one of their biggest hidden costs,” said Yaniv Shimoni and Kaushik Narasimhan, co-founders of Take2. “We're building AI agents that actually do the work — not just assist — so recruiting teams can focus on strategic decisions instead of time-consuming manual processes.” https://hrtechfeed.com/take2-raises-14m-series-a-to-automate-healthcare-recruiting/

“Help me find an engineering job in Chicago…” A simple prompt like that can open a world of opportunity for a job seeker, says Indeed. The company just announced an expansion of its relationship with OpenAI, bringing Indeed's massive job marketplace directly into the ChatGPT experience. https://hrtechfeed.com/indeed-now-integrated-with-chatgpt/

Phenom announced the acquisition of Be Applied, an AI‑driven cognitive assessment solution that validates candidate and employee capabilities at scale. By combining Phenom's AI with Be Applied's evidence‑based assessments, enterprises can confidently move to skills‑first hiring without sacrificing speed, quality, or fairness. https://hrtechfeed.com/phenom-acquires-cognitive-assessment-platform/

Recruit Holdings just dropped its Q3 FY2025 results. The U.S. labor market has been characterized as “stabilizing” or even “softer” lately, with job postings declining from their post-pandemic peaks. However, Recruit's U.S. revenue tells a different story. Revenue surge: U.S. HR Technology revenue grew 10.1% year-over-year in dollar terms this quarter. https://hrtechfeed.com/high-tech-high-growth-recruit-holdings-u-s-engine-revs-up-in-q3/

Workday, Inc. announced that co-founder and current executive chair Aneel Bhusri is returning as chief executive officer as the company enters its next chapter, focused on leading in the rapidly evolving AI era. Carl Eschenbach is stepping down as CEO and as a member of the board after leading Workday through a period defined by global growth, an expanded industry focus, and strengthened operational discipline. He will continue to support Bhusri and the company as strategic advisor to the CEO. https://hrtechfeed.com/workday-announces-ceo-transition-as-co-founder-aneel-bhusri-returns-to-lead-the-companys-next-chapter/

Learn more about your ad choices. Visit megaphone.fm/adchoices
You're disciplined. You're committed. You show up every day and put in the work. But what happens when effort and motivation aren't delivering the results you know you're capable of? Santiago Brand is an international educator and consultant in brain mapping and neurofeedback who uses real brain data to reveal what's actually happening when people perform, stall, or burn out. Trained as both a sport and clinical psychologist, Santiago has spent over 17 years across more than 26 countries helping leaders and high performers improve focus, recover faster from stress, and perform with greater consistency—not by grinding harder, but by understanding the brain that's running the show. In this conversation, Santiago reveals why even the most driven individuals hit invisible walls. You'll discover how trauma markers and emotional dysregulation show up in brain maps, why high performers resist the truth about their own humanity, and how quantitative EEG technology turns invisible obstacles into something you can finally work with. Because once you see what your brain is doing, you can't unsee it—and that's when real transformation begins. If you've ever felt like you're doing all the right things but the breakthrough still hasn't happened, this episode shows you exactly where to look next.
Sworn to Justice (1996) was chosen by friend of the show and Patreon supporter Leigh, and is a prime example of mid-90s direct-to-video action thrillers built around martial arts credentials and late-night cable appeal. Produced by PM Entertainment — a studio known for churning out low-budget, high-concept action films — the movie was designed specifically for the booming VHS rental market rather than theatrical release. Director Paul Maslak leaned into the studio's house style: fast-paced action, neon-lit cityscapes, and a blend of crime, thriller, and exploitation elements. The film was shot quickly and economically, typical of PM's efficient production model, which prioritized practical stunts and tight schedules over polish or prestige.

The production's biggest selling point was its lead, Cynthia Rothrock, already a well-established martial arts star with multiple Hong Kong and American action credits. Her real-life fighting background allowed the filmmakers to stage fight scenes with minimal doubles, keeping the choreography grounded and physical. Filming took place largely around Los Angeles, using recognizable streets and interiors to stretch the budget while maintaining a contemporary urban feel. Like many PM Entertainment titles, Sworn to Justice found its audience through home video, cable rotation, and word of mouth, eventually earning cult status among fans of 90s action cinema and martial arts B-movies. Today, it's remembered as a quintessential slice of direct-to-video action filmmaking — scrappy, stylish, and unapologetically of its era.

Check out Leigh on The Movie Vent.

If you enjoy the show and would like to support us, we have a Patreon here. Referral links also help out the show if you were going to sign up:
NordVPN
NordPass

Trailer Guy Plot Summary
A city drowning in crime… a system that's failed… and one woman who's had enough. When the law can't protect the innocent, justice goes underground. Trained to fight, driven by vengeance, and armed with nothing but her fists and her will, one relentless warrior takes the streets by storm — tearing through criminals, conspiracies, and anyone foolish enough to stand in her way. *Sworn to Justice* — no badge… no backup… no mercy.

Fun Facts
Sworn to Justice is often categorized as an “erotic thriller meets martial arts action” hybrid, a niche genre that was surprisingly popular in the mid-1990s video market.
The film was released during the peak VHS rental era, when action titles like this regularly outperformed small theatrical releases in video stores.
Cynthia Rothrock performs nearly all of her own fight choreography, showcasing authentic Tang Soo Do and karate techniques rather than stylized wire work.
The movie blends martial arts with noir-style detective elements, giving it a darker tone compared to Rothrock's earlier Hong Kong films.
Several supporting cast members were real stunt performers, which helped make the fight scenes feel more physical and less choreographed.
The film developed a late-night cable TV following on networks like USA Network and HBO Zone throughout the late 1990s and early 2000s.
Rothrock fans often rank this among her most “adult-oriented” American roles, marking a tonal shift from her earlier PG-13 action vehicles.
The movie features a synth-heavy 90s action score, typical of direct-to-video thrillers of the era.
Collectors consider original VHS and DVD releases of the film minor cult items within martial arts movie circles.

thevhsstrikesback@gmail.com
https://linktr.ee/vhsstrikesback
Learn more about Michael Wenderoth, Executive Coach: www.changwenderoth.com

Most leaders were taught to leave their emotions at the door. Today's guest says that advice isn't just outdated — it's costly. In this episode of 97% Effective, host Michael Wenderoth sits down with Dina Denham Smith, executive coach and bestselling author of Emotionally Charged, to unpack why emotional skill is now a core leadership capability, not a “soft” add-on. Drawing on behavioral science and her work as an executive coach and strategic advisor, Dina explains why emotions are data, how leaders unknowingly perform massive emotional labor, and what it really takes to manage triggers, prevent burnout, and unlock performance. As Dina puts it: “Emotions are money.” By the end of this conversation, you'll see why ignoring emotions is bad for you and bad for business – and what to do instead.

SHOW NOTES
Dina's story — and why this work matters
One surprising thing about Dina you won't find on the internet
How Emotionally Charged would have helped Dina earlier in her own career
What sparked Dina's interest in the science of emotions
How the pandemic and technology shifts dramatically increased the emotional demands placed on leaders
Core ideas from Emotionally Charged
The key takeaway: Emotions are information
“Emotions are money”: how feelings directly translate into performance, retention, and results
The biggest myth Dina wants to retire: that emotions get in the way of good business decisions
What “emotional labor” really means — and why research shows leaders perform as much of it as customer service professionals (and in more complex ways)
The three layers of every emotion: physiology, cognition, and behavior
Why suppressing emotions is like trying to hold beach balls underwater
Practical tools you can use immediately
Beach balls, masks, and “letting it all hang out”: finding the right balance at work
Why expanding your emotional vocabulary dramatically improves self-regulation
Dina's BRAVE framework for managing triggers in real time: Breathe, Refocus, Accept, Verbalize, Engage
Restoration (not “self-care”): four evidence-based ways leaders recover from emotional strain: Detachment, Relaxation, Mastery, Control
Power, leadership, and team culture
Why leaders consistently underestimate their emotional impact
How power amplifies everything you feel and show
Why everyone cues off their leader's emotional signals (often unconsciously)
How leaders can normalize emotional expression on their teams — without turning meetings into complaint sessions
Simple ways managers can reset emotional culture inside their own sphere of influence
Dina's reminder: emotional skills are learnable — and improvable at any stage of your career.

BIO AND LINKS
Dina Denham Smith is an executive coach and strategic advisor who helps senior leaders build their capacity, scale their impact, and thrive in complexity. For more than a decade, she has partnered with executives at some of the world's most successful companies, helping them navigate the demands of operating at the highest levels. Dina holds an MS in Industrial/Organizational Psychology and an MBA from the Ross School of Business at the University of Michigan, and she is credentialed by both the ICF and EMCC as an executive and team coach. A prolific thought leader, Dina has published more than 60 articles on leadership for Harvard Business Review, Fast Company, Forbes, and other premium outlets.
She is the lead author of Emotionally Charged: How to Lead in the New World of Work (Oxford University Press, 2025).

Connect with Dina
Website: https://dinadsmith.com
LinkedIn: https://www.linkedin.com/in/dina-denham-smith/
Her book: https://dinadsmith.com/book/

People and Books Referenced
Dr. Alicia Grandey — Dina's co-author: https://psych.la.psu.edu/people/aag6/
Why We Sleep by Matthew Walker: https://a.co/d/07CbSJAY

More from 97% Effective
Michael's Award-winning Book: Get Promoted: What You're Really Missing at Work That's Holding You Back: https://tinyurl.com/453txk74
Watch this episode on YouTube: https://www.youtube.com/@97PercentEffective

Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy
Today’s Bible Verse: “For I myself am a man under authority, with soldiers under me. I tell this one, ‘Go,’ and he goes; and that one, ‘Come,’ and he comes. I say to my servant, ‘Do this,’ and he does it.” — Matthew 8:9

Matthew 8:9 highlights a powerful moment of faith from an unexpected source. A Roman centurion recognized something many others missed — Jesus’ authority didn’t depend on physical presence. He understood that if Jesus gave the word, it was as good as done.

Want to listen without ads? Become a BibleStudyTools.com PLUS Member today: https://www.biblestudytools.com/subscribe/

MEET YOUR HOST: Chaka Heinze at https://www.lifeaudio.com/your-daily-bible-verse/

Chaka Heinze is a writer, speaker, and lover of the Bible. She is actively involved in her local church on the Prayer and Healing team and mentors young women seeking deeper relationships with God. After personally experiencing God's love and compassion following the loss of her eleven-year-old son, Landen, Chaka delights in testifying to others about God's unfathomable and transformative love that permeates even the most difficult circumstances. Chaka and her husband of twenty-six years have five children ranging from adult age to preschool. Trained as an attorney, she’s had the privilege of mediating sibling disputes for twenty-plus years. Follow her at Chakaheinze.com.

This episode is sponsored by Trinity Debt Management. If you are struggling with debt, call Trinity today. Trinity's counselors have the knowledge and resources to make a difference, whether we're helping people pay off their unsecured debt or offering assistance to those behind in their mortgage payments. Our intention is to help people become debt-free and, most importantly, remain debt-free for keeps. If your debt has you down, we should talk. Call us at 1-800-793-8548 | https://trinitycredit.org

Discover more Christian podcasts at lifeaudio.com and inquire about advertising opportunities at lifeaudio.com/contact-us.
Have you been feeling an unexplained anxiety, a low-grade tension, or a sadness with no clear source? Do you feel perpetually “on alert,” emotionally raw, or disconnected from joy even when your personal life seems stable? You are not broken, and you are not alone.

In this episode of Infinite Life, Infinite Wisdom, Susan Grau addresses the palpable yet often nameless weight so many are carrying. This isn't about politics or prediction. It's a compassionate exploration of what it means to be a sensitive, perceptive human when the collective consciousness itself feels dysregulated. Susan explains how widespread grief, fear, and unresolved trauma create an atmosphere of “ambient suffering” that our nervous systems, especially those of empaths, healers, and caregivers, cannot help but absorb. This leads to symptoms like chronic anxiety, irritability, emotional numbness, brain fog, and exhausting mental loops as our systems search for closure and safety that the external world cannot provide.

Moving beyond spiritual bypassing, Susan offers a practical and somatic path back to yourself. She reframes anxiety not as a thought problem, but as a nervous system signal. She redefines confusion not as incompetence, but as the necessary “space between stories” when old maps no longer fit. The core of healing, she reveals, lies in one vital shift: moving from the question “Is this story true?” to “Is this story regulating my nervous system?”

This episode is a gentle, firm guide to empowerment through inner authority. It's about learning to discern what energy is yours to carry and what belongs to the collective, and how to release the latter without guilt. Susan provides actionable anchors, like pausing your internal narrative, regulating your body, and shrinking your timeframe to the present moment, to help you reclaim your grounding, your peace, and your power.

In This Episode:
[00:00] Introduction
[01:24] Collective emotional overwhelm
[02:38] Dysregulation and self-regulation
[03:38] Ambient suffering and emotional reactivity
[06:12] Nervous system and collective stress
[07:27] Symptoms of overload and disconnection
[08:24] Anxiety: body vs. mind
[10:30] Confusion as a developmental stage
[12:40] The stories we tell ourselves
[14:50] Survival stories and letting go
[16:58] Regulating the nervous system
[19:01] Empowerment without bypassing
[20:17] Personal vs. collective anxiety
[22:36] Grounding and sensation of safety
[24:06] Returning to self: practical solutions
[25:42] The limits of control and self-relationship
[28:55] Conclusion

Notable Quotes
[01:19] "Nothing's wrong with you. And I don't mean that in a motivational way. I mean it in a psychological way. In an emotional way."
[02:59] "We don't need to bury our heads in the sand... But how do we self-regulate so that we can handle what's going on?"
[08:19] "Our nervous system needs closure, and when it doesn't have it, it plays a loop in our brains."
[11:21] "Confusion is the space between the stories."
[11:38] "The old map no longer fits, but the new map hasn't been drawn yet."
[16:27] "A thought can feel absolutely true and still not be truth."
[21:38] "You are not meant to metabolize the entire world. You are allowed to release what does not belong to you."
[22:43] "Safety is not an idea. It's a sensation."
[19:01] "Empowerment doesn't have to be loud."
[29:02] "The only control you have is over you. And that is the scariest comment, isn't it?"
[30:08] "Don't run from fear.
Shake hands with it, face it and look it in the eye."

Susan Grau
Susan Grau is an internationally celebrated intuitive life coach, a key opinion leader, author, medium and speaker, who discovered her ability to communicate with the spirit world after a near-death experience at age four. Trained by Dr. Raymond Moody, James Van Praagh, and Lisa Williams, Susan is a Reiki Master, hypnotherapist, and grief therapist. Her new book, "Infinite Life, Infinite Lessons," published by Hay House, explores healing from grief and the afterlife. With media coverage in GOOP, Elle, and The Hollywood Reporter, Susan's expertise extends to podcasts, radio shows, and documentaries. She offers private mediumship readings, life path guidance, reiki sessions, and hypnotherapy, aiding individuals in healing and finding spiritual guidance.

Resources and Links
Infinite Life, Infinite Wisdom Podcast: Infinite Life, Infinite Wisdom
Susan Grau: Website | Order | Facebook | Instagram | YouTube | TikTok
Mentioned: Infinite Life, Infinite Lessons: Wisdom from the Spirit World on Living, Dying, and the In-Between by Susan Grau

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
In this episode of The Dairy Podcast Show, Dr. Karun Kaniyamattam from Texas A&M University breaks down how decision modeling and artificial intelligence can support real-world dairy management. He shares practical examples of how data streams, genetic indices, and modeling tools can improve disease control, labor efficiency, and long-term profitability without adding unnecessary complexity. Listen now on all major platforms.

"Decision modeling helps evaluate trade-offs that occur when balancing animal health, productivity, environmental responsibility, and farm profitability."

Meet the guest: Dr. Karun Kaniyamattam is an Assistant Professor of Livestock Data Analytics and Artificial Intelligence at Texas A&M AgriLife Research. Trained as a veterinarian, he focuses his work on decision modeling, artificial intelligence, and sustainable dairy cattle systems. His research integrates animal health, economics, and production to support better farm-level decisions.

Liked this one? Don't stop now — here's what we think you'll love!

What you'll learn:
(00:00) Highlight
(01:43) Introduction
(12:16) Decision modeling
(14:45) Systems thinking
(17:43) Computing power
(19:50) AI definition
(23:47) Surprising insights
(28:01) Final three questions

The Dairy Podcast Show is trusted and supported by innovative companies like:
* Priority IAC
* Adisseo
* Agri-Comfort
* Jones-Hamilton Co.
* Lallemand
* CowManager
* Afimilk
* Evonik
* BoviSync
* Berg + Schmidt
* Natural Biologics
* Agrarian Solutions
* AHV
* dsm-firmenich
* Protekta
* DietForge
This method is what helped Rad and me (Yani) recover from multiple injuries and maximise our athletic performance well into adulthood… and it's the reason our clients out-perform most professional personal trainers.

▶️ Rad and I created a free deep-dive coaching video. Watch now — and see why this simple shift accelerates strength and mobility gains.
Homes That Heal | Transform Your Home Into a Health and Wellness Sanctuary
Ep 87 | If you've been feeling anxious, burned out, or disconnected from your body, this episode is a gentle reminder that healing doesn't have to be complicated. Often, real healing begins by returning to the basics—supporting the nervous system with nourishment, nature, and simple daily rhythms.Jen sits down with Colleen Rathbun, board-certified wellness coach and registered nurse, to explore nervous system regulation, holistic mental health, and how food and environment impact anxiety, stress, and emotional well-being. You'll hear how blood sugar balance, protein intake, and nervous system support can influence anxiety symptoms—and why so many people struggle when stress, nourishment, and lifestyle are out of alignment. This episode connects the dots between mind-body healing, emotional regulation, nervous system support, and natural approaches to mental health that actually work in real life.
Most leaders strive for this praise but are thrown into the deep end - responsible for up to 260 meetings a year and expected to know how to motivate teams, navigate tough conversations, and drive results with little guidance. This episode is all about learning to lead. So if you're trying to step into a new role or even just become a better boss, this episode is for you.

Today's guest is Ashley Herd, former Head of HR North America at McKinsey, national keynote speaker, and LinkedIn Top Voice who has trained over 250,000 managers. In her new book, out tomorrow, February 6th — The Manager Method: A Practical Framework to Lead, Support, and Get Results (February 10, 2026 // Hay House) — she helps managers at every level lead with confidence, navigate challenges, build strong teams, and avoid burnout.

Topics we explore in this episode:
Major challenges that managers face
Why you should challenge yourself to become a better leader. Shouldn't real-world experience be enough?
How a career quilt, a collection of career experiences, can shape you as a leader
How the “Pause–Consider–Act” framework helps managers lead with confidence
Why leaders NEED to take time off
Ways a manager can stop micromanaging while keeping accountability high
How AI can make leaders more human by improving communication, time management, and connection

Resources Mentioned
ORDER ASHLEY'S BOOK, THE MANAGER METHOD: Managermethod.com/book
Connect on LinkedIn with Ashley Herd: https://www.linkedin.com/in/ashleyherd/
Connect with Chris, the host: https://www.linkedin.com/in/chris-villanueva-cprw/
Let's Eat, Grandma — Resume writing services to help you stand out with clarity and confidence.

Hosted on Acast. See acast.com/privacy for more information.
The Mindful Healers Podcast with Dr. Jessie Mahoney and Dr. Ni-Cheng Liang
We have been taught to wait as a measure of professionalism. We delay rest, joy, and alignment because medicine taught us that patience equals commitment. Many of us are still waiting long after training ends, hoping the system will change. This waiting can feel loyal, responsible, even virtuous. Over time, it quietly costs us our presence, our health, and our lives.

PEARLS OF WISDOM
• Waiting is not neutral. It often preserves systems that rely on our overfunctioning and silence.
• Many of us are not waiting because it is right, but because we were trained to believe it is required.
• The system is not always broken; sometimes it is functioning exactly as designed.
• Agency begins when we stop waiting for permission and choose alignment, even in small ways.
• Fear often shows up when we stop waiting, and fear does not mean we are wrong.

Reflection Questions:
Where in our lives have we normalized waiting that no longer feels aligned?
What are we postponing because we believe now is not the right time?
What might become possible if we stopped waiting for permission?
Who benefits from our waiting, and who bears the cost?

CLOSING INVITATION
This conversation is not about leaving medicine. It is about staying in medicine without disappearing ourselves in the process. Many of us were trained to endure quietly and trust that relief would come later. What we are exploring instead is the possibility of choosing ourselves now, even gently and imperfectly. Coaching and retreat spaces are one way we practice this shift together. Not to fix ourselves, but to remember that our lives matter now, not someday. We are allowed to live full lives alongside meaningful work. If coaching, a retreat, or an intentional pause feels supportive, notice what comes up when you consider not waiting. Often, the only thing standing between us and alignment is the permission we can give ourselves.

Find out about 1:1 coaching with Dr. Jessie Mahoney.
Learn about Jessie's small group coaching programs: www.jessiemahoneymd.com/group-coaching
Join Jessie at Nicasio Creek Farm CME Wellness Retreats for Women Physicians, or Jessie & Ni-Cheng at the COED Connect in Nature Mindfulness Retreat at Green Gulch Farm and Zen Center. www.jessiemahoneymd.com/retreats

*Nothing shared in the Healing Medicine Podcast is medical advice.

Other useful links to explore:
• National Academy of Medicine – Clinician Well-Being https://nam.edu/initiatives/clinician-resilience-and-well-being/
• University of Arizona Integrative Medicine https://integrativemedicine.arizona.edu
Welcome to episode #1022 of Thinking With Mitch Joel (formerly Six Pixels of Separation). At a moment when organizational change is too often treated as a mandate rather than an experience people choose to embrace, Phil Gilbert has spent his career proving that transformation only sticks when it earns genuine buy-in. Phil is a design executive, transformation leader and former General Manager of Design at IBM, where he architected one of the largest cultural and operational shifts in corporate history, helping nearly 400,000 employees across 180 countries become more entrepreneurial, agile and customer-centered. Trained as both a designer and systems thinker, Phil brought design thinking out of studios and into the core of enterprise decision-making, reshaping how teams collaborated, how products were built, and how leaders understood their customers. His work at IBM addressed hard truths, including the company's struggles with usability and missed opportunities in the early cloud era, by treating change itself as a product worthy of rigor, investment, and care. That experience became the foundation for his book Irresistible Change - A Blueprint For Earning Buy-In And Breakout Success, which blends narrative and field guide to show how large organizations can scale transformation by focusing on people, practices, and environments rather than slogans or top-down directives. Phil's approach reframes culture as an outcome, not an initiative, arguing that lasting change emerges when employees see themselves in the future being designed. Beyond IBM, his work as an executive coach and advisor continues to focus on how leaders navigate complexity, align teams, and thoughtfully integrate technologies like AI into human systems without eroding trust or creativity. Grounded in real-world execution rather than theory, Phil's perspective challenges organizations to stop forcing change and start making it irresistible. Enjoy the conversation… Running time: 1:02:49. Hello from beautiful Montreal. Listen and subscribe over at Apple Podcasts. Listen and subscribe over at Spotify. Please visit and leave comments on the blog - Thinking With Mitch Joel. Feel free to connect to me directly on LinkedIn. Check out ThinkersOne. Here is my conversation with Phil Gilbert. Irresistible Change - A Blueprint For Earning Buy-In And Breakout Success. Follow Phil on LinkedIn. Chapters: (00:00) - Introduction to Phil Gilbert and His Journey. (01:26) - IBM's Transformation and Challenges. (04:17) - The Shift from Technology to Product. (10:55) - Implementing Design Thinking at IBM. (16:30) - Cultural Change and Its Impact on Outcomes. (22:53) - The Role of Teams in Transformation. (26:40) - Branding the Change: Hallmark Program. (32:22) - The Importance of Team Selection in Transformation. (34:59) - Creating Demand for Change. (37:23) - Agency and Team Resilience. (38:06) - IBM's Market Position and Transformation. (41:14) - The Shift in Work Dynamics. (44:46) - Rethinking Office Spaces. (48:58) - Irresistible Change and Transformation Failures. (53:51) - AI Integration and Market Forces. (59:38) - The Impact of Design Thinking on Business.
Love the episode? Send us a text!

What happens when a breast surgeon becomes a breast cancer patient—and then faces a second diagnosis years later?

In this deeply personal and illuminating episode of Breast Cancer Conversations, host Laura Carfang is joined by Dr. Anne Peled, a board-certified breast, reconstructive, and plastic surgeon who has treated thousands of patients—and also navigated her own early-stage breast cancer diagnosis, followed years later by a new primary DCIS diagnosis. Together, Laura and Dr. Peled unpack what patients are rarely told about DCIS (stage zero breast cancer), the difference between recurrence and a second primary cancer, and how advances in surgery are transforming survivorship—including sensation-preserving mastectomy. This conversation bridges clinical expertise and lived experience, offering clarity, compassion, and permission to choose the path that aligns with your body and values.

In this episode:
What DCIS really is—and why “stage zero” can be misleading
Recurrence vs. second primary breast cancer: why biology matters
Lumpectomy vs. mastectomy and why survival outcomes are often the same
How guilt and self-blame show up after a second diagnosis
Being diagnosed with breast cancer as a physician
Navigating treatment when your colleagues are your caregivers
The evolution of oncoplastic surgery and patient-centered care
Why loss of breast sensation is under-discussed—but life-changing
How sensation-preserving mastectomy works
What questions to ask your surgeon about sensation, nerves, and recovery
Making decisions based on your priorities—not fear or pressure

About today's guest
Dr. Anne Peled is a board-certified plastic, reconstructive, and breast surgeon in private practice in San Francisco and Co-Director of the Sutter Health California Pacific Medical Center Breast Cancer Center of Excellence. Trained at Amherst College, Harvard Medical School, and UCSF, Dr. Peled completed a unique fellowship combining breast oncologic surgery and reconstruction. Her clinical and research work focuses on oncoplastic surgery, preserving and restoring sensation after mastectomy, improving patient outcomes, and breast cancer risk reduction. She is also a breast cancer survivor herself, bringing rare dual insight to patient care.

Support the show

Latest News: Become a Breast Cancer Conversations+ Member! Sign Up Now. Join our Mailing List - New content drops every Monday! Discover FREE programs, support groups, and resources! Enjoying our content? Please consider supporting our work.
Pain and hardship are unavoidable in a broken world—but how we interpret them makes all the difference. In Hebrews 12:4–13, we're reminded that suffering is not a sign of God's absence, anger, or rejection. Instead, for the Christian, hardship is often God's loving discipline—His purposeful training as a faithful Father. God uses pain to shape His children, confirm our belonging, grow us in holiness, and produce lasting fruit. When hardship tempts us to grow weary or quit the race, Scripture calls us to take heart, submit trustingly to our Father, and keep running with endurance. The same God who ordained suffering for His Son also raised Him from the grave, reassuring us that He will faithfully finish His work in us.
“Time of Useful Consciousness” (the aviation term for how long a pilot remains usefully conscious after the oxygen cuts out). Caroline welcomes astro-mytho colleague Judith Tsafrir, as we weave powerful testimony from Aliya Rahman and Renee Good's brothers, Luke & Brent Ganger, rousing music, the rapidly arising dangers, and the descriptive & guiding astrological narrative of effective strategy. The Good Medicine of the Bad Bunny Super Bowl, the Monks and Aloka. In Trickster We Trust…

Judy Tsafrir, MD is a physician, shamanic practitioner, and guide in the work of healing and human development. Trained in adult and child psychiatry and psychoanalysis, and a longtime Harvard Medical School faculty member, she brings an integrative approach that bridges depth psychology, holistic medicine, and spiritual wisdom. Judy draws on shamanism, astrology, the Tarot, Reiki, and intuitive practices, alongside her medical and psychoanalytic training, to support healing at emotional, physical, and spiritual levels. She is the author of Sacred Psychiatry: Bridging the Personal and Transpersonal to Transform Health and Consciousness, and her work is grounded in the belief that healing arises through the integration of heart, mind, body, and spirit—and that personal healing is inseparable from the healing of our communities and planet. https://www.JudyTsafrirMD.com

The post Time of Useful Consciousness appeared first on KPFA.
In a world where confidence is rewarded and humility can feel like a liability, Stanford Law professor Robert MacCoun argues for something radical: fewer unwavering opinions, more critical reflection, and a better way to disagree. On Stanford Legal, MacCoun joins co-hosts Pamela Karlan and Diego Zambrano for a conversation about how “habits of mind” borrowed from science can help citizens, lawyers, and policymakers think more clearly and function more effectively in a pluralistic society.

MacCoun is the James and Patricia Kowal Professor of Law at Stanford Law School, a professor by courtesy in Stanford's Psychology Department, and the university's senior associate vice provost for research. Trained as a social psychologist, his work sits at the intersection of law, science, and public policy, with decades of research on decision-making, bias, and the social dynamics that shape how evidence is interpreted. In the episode, he draws on his most recent book, Third Millennium Thinking: Creating Sense in a World of Nonsense, co-authored with Nobel Prize–winning physicist Saul Perlmutter and philosopher John Campbell, to explain why probabilistic thinking, intellectual humility, and what he calls an “opinion diet” are essential tools for modern civic life.

Links:
Robert MacCoun >>> Stanford Law page
Third Millennium Thinking >>> Stanford Law page

Connect:
Episode Transcripts >>> Stanford Legal Podcast Website
Stanford Legal Podcast >>> LinkedIn Page
Rich Ford >>> Twitter/X
Pam Karlan >>> Stanford Law School Page
Diego Zambrano >>> Stanford Law School Page
Stanford Law School >>> Twitter/X
Stanford Lawyer Magazine >>> Twitter/X

(00:00:00) Introduction and Noise vs. Bias
(00:04:42) The Power of Probabilistic Thinking
(00:12:20) Juries, Community Judgment, and Reasonable Doubt
(00:13:23) Habits of Community
(00:25:08) Motivation, Tools, and Decision Processes
(00:26:14) When Evidence Won't Settle It

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Episode Summary
Potent in vitro hits often fail in vivo—Martin Marro details how robust assay choice and pathway deconvolution can revive GPCR drug discovery programs. Listeners will learn practical approaches to assay development for GPCR drug discovery, the pitfalls of calcium readouts, and how identifying pathway bias impacts translational success. Dr. Marro shares his experience bridging in vitro–in vivo gaps, refining selection flowcharts, and leveraging pharmacology research to drive clinical candidates. His strategic perspective is rooted in years of leading multimodal discovery teams in pharma and biotech.

Key Takeaways
Assay selection critically shapes the trajectory from hit to clinic.
Calcium and IP1 assays may not predict in vivo efficacy for all Gq-coupled receptor targets.
Alternative pathway analysis may be essential for mechanism elucidation.
Persistence in probing beyond standard readouts can rescue high-profile discovery programs.
Team structure and collaborative problem-solving are pivotal in resolving translational bottlenecks.

Explore Dr. GPCR Resources
- Dr. GPCR Ecosystem
- Membership & Pricing
- Weekly News
Explore the full depth of GPCR resources, events, and member-exclusive tools with Dr. GPCR Premium.

About the Guest
Dr. Martin Marro leads the Cell Pharmacology group in the DOCTA division at Lilly's Seaport Innovation Center in Boston, MA. Trained as a pharmacologist, Dr. Marro has accumulated over 20 years of experience spanning large pharmaceutical firms—including GSK, Novartis, and Lilly—and innovative biotech such as Tectonic Therapeutic. He holds deep expertise in early drug discovery across small molecules, peptides, and antibody therapeutics for metabolic, cardiovascular, and gastrointestinal diseases. Dr. Marro's research has been central to the discovery and characterization of multiple clinical candidates, with a focus on GPCR target validation, receptor pharmacology, and translational assay strategies. He played a key role in patenting and developing novel fatty acid-conjugated GLP-1 receptor agonists. Driven by the challenge of translating robust in vitro science into clinical proof-of-concept, Dr. Marro's leadership continues to impact the field of GPCR drug discovery.

Keywords: gpcr podcast, assay development, pharmacology research.
The Krach Institute is throwing open the doors to its Tech Diplomacy Academy, aiming to train leaders worldwide on the technologies reshaping security and freedom. We'll dig into why tech diplomacy has become essential, and how free societies can stay ahead as the stakes rise with the CEO of the Krach Institute for Tech Diplomacy at Purdue, Michelle Giuda.See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Sponsored by Fidei Email:https://www.fidei.emailSources:https://www.returntotradition.orgorhttps://substack.com/@returntotradition1Contact Me:Email: return2catholictradition@gmail.comSupport My Work:Patreonhttps://www.patreon.com/AnthonyStineSubscribeStarhttps://www.subscribestar.net/return-to-traditionBuy Me A Coffeehttps://www.buymeacoffee.com/AnthonyStinePhysical Mail:Anthony StinePO Box 3048Shawnee, OK74802Follow me on the following social media:https://www.facebook.com/ReturnToCatholicTradition/https://twitter.com/pontificatormax+JMJ+#popeleoXIV #catholicism #catholicchurch #catholicprophecy#infiltration
On today's episode of The Wholesome Fertility Podcast, I'm joined by Dr. Christina Bjorndal ( @drchrisbjorndal), a naturopathic doctor, author, and mental health expert, for a powerful and deeply honest conversation about healing beyond diagnoses. Dr. Christina shares her personal journey through depression, bipolar disorder, suicide attempts, and recovery, and how those experiences shaped her integrative approach to mental health and whole-person healing. Together, we explore the profound connection between the mind, body, nervous system, and spirit, and why true healing requires more than symptom management. In this episode, we talk about how trauma, nutrition, gut health, stress, and unexamined thought patterns can influence mental health and fertility. Dr. Christina also explains why labels can be limiting, how epigenetics shapes our health beyond genetics, and why cultivating self-compassion and safety in the body is foundational to healing. This conversation offers hope, perspective, and practical insights for anyone navigating mental health challenges, fertility struggles, or the pressure to "fix" themselves instead of understanding what their body is asking for. Key Takeaways: Mental health diagnoses describe symptoms, not root causes Healing requires addressing the nervous system, not just the mind Thoughts directly influence physiology through stress hormones and immune function Epigenetics explains how environment, trauma, and lifestyle shape health outcomes Digestion and nutrition depend on nervous system regulation, not just food choices Self-compassion and self-acceptance are essential, not optional, for healing Hope and possibility are powerful forces in recovery and fertility journeys Guest Bio: Dr. Christina Bjorndal ( @drchrisbjorndal), ND is a naturopathic doctor, mental health advocate, gifted speaker, and best-selling author who blends clinical expertise with lived experience. Drawing from her personal journey through depression, anxiety, bulimia, bipolar disorder type 1, cancer, and surviving multiple suicide attempts, she offers a deeply compassionate and integrative approach to mental health. Trained in naturopathic and mind-body medicine, Dr. Christina focuses on whole-person healing, addressing the nervous system, nutrition, trauma, and self-compassion. She is the author of Beyond the Label: 10 Steps to Improve Your Mental Health with Naturopathic Medicine and the creator of two educational programs supporting both individuals and clinicians in moving beyond mental health labels. Connect with Dr. Christina Bjorndal: Website: https://drbjorndal.comInstagram: @drchrisbjorndalFacebook: Dr. Christina BjorndalTwitter/X: @drbjorndalYouTube: Christina Bjorndal Disclaimer: The information shared on this podcast is for educational and informational purposes only and is not intended as medical advice. Please consult with your healthcare provider before making any changes to your health or fertility care. Ready to discover what your body needs most on your fertility journey? Take the personalized quiz inside The Wholesome Fertility Journey and get tailored resources to meet you exactly where you are: To find out more about our Fertility Coaching Certification Program, click here: https://www.michelleoravitz.com/thewholesomefertilitymethodcertification https://www.michelleoravitz.com/the-wholesome-fertility-journey For more about my work and offerings, visit: www.michelleoravitz.com Curious about ancient wisdom for fertility? 
Grab my book The Way of Fertility: https://www.michelleoravitz.com/thewayoffertility

Join the Wholesome Fertility Facebook Group for free resources & community support: https://www.facebook.com/groups/2149554308396504/

Connect with me on social:
Instagram: @thewholesomelotusfertility
Facebook: The Wholesome Lotus
We are standing at a turning point in the hobby. Before we rush into what comes next, this episode slows things down and looks back at the Panini era. Not to praise it. Not to tear it down. To understand it.

Panini did more than release products. It reshaped how collecting looks, feels, and moves. From Prizm changing modern card design, to product ladders, chase mechanics, and a release calendar that never stopped, this era trained collectors in ways many of us did not notice in real time. This episode breaks down what actually happened during the Panini run. How the hobby expanded. How attention replaced intention. How rarity, desire, and visibility became tangled. And why understanding this era gives you more control as we head into the next one.

Get your free copy of Collecting For Keeps: Finding Meaning In A Hobby Built On Hype
Start your 7-day free trial of Stacking Slabs Patreon today
[Distributed on Sunday] Sign up for the Stacking Slabs Weekly Rip Newsletter using this link

Follow Stacking Slabs: | Twitter | Instagram | Facebook | Tiktok

★ Support this podcast on Patreon ★
In this powerful episode, Debra Pascali-Bonaro sits down with reproductive health coach and mom of three, Liya Teplitsky, to explore how mindset, nutrition, and intentional pleasure turned birth from excruciating pain into ecstatic orgasmic bliss. Liya shares her three home birth stories—from a long, painful first labor on a yoga mat to a painless second birth and finally an intentional orgasmic birth supported by a vibrator, midwife, and an unshakable belief that birth can be pleasurable. You'll hear how Liya: -Discovered the concept of home and orgasmic birth from a chance encounter in Las Vegas and the film "Orgasmic Birth." -Used visioning, hypnobirthing, and a "no Plan B" mindset to manifest a home birth while others in her class transferred to hospital. -Crafted a controversial but powerful nutrition approach focused on animal-based fats, protein, and bone broth to support pregnancy, birth, and postpartum. -Experienced a pain-free, midwife-free second birth at 37 weeks where her husband caught the baby before the midwife arrived. -Trained her body and mind for an intentional orgasmic third birth using a vibrator, spiritual work, and radical clarity about her desires. -Shifted her beliefs around age, physiology, and "biblical" ideas of suffering in childbirth, choosing instead to pursue pleasure as a birthright. This episode is for you if: -You're curious or skeptical about orgasmic birth and want to hear a detailed, grounded story. -You're preparing for birth and want to explore mindset, manifestation, and physiology as tools for a different experience. -You support birthing people and want a deeper understanding of how nutrition, beliefs, and sexual energy shape labor. Connect with Liya Teplitsky: Liya currently works one-on-one as a mentor and nutritional coach, supporting women with fertility challenges and those preparing for natural conception and birth. She offers guidance in multiple languages and shares more about her nutritional approach, pregnancy preparation, postpartum nourishment, and mom life, Website - https://liyateplitsky.com Facebook - / 61572499001960 Instagram - / liyatep Purchase the PleasureVibe Buy Now and Access BONUS Resources http://orgasmicbirth.com/fin-pleasure... Review and follow the show—we'd love to hear how this episode inspired you! Connect with Debra! Website: https://www.orgasmicbirth.com Instagram: / orgasmicbirth X: / orgasmicbirth YouTube / orgasmicbirth1 Tik Tok / orgasmicbirth LinkedIn: / debra-pascali-bonaro-1093471 ----
What happens when a board-certified medical doctor discovers energy healing—and realizes science and spirituality have been saying the same thing all along? Medical doctor Ana Baptista, MD (hematologist, 18 years experience) bridges Western medicine and energy healing. Discover the neuroscience behind spinal energetics, why your heart is a second brain, real healing stories (chronic pain resolved in one session), and the emotional roots of disease. Learn about co-regulation, alignment, and why science and spirituality are finally collaborating. For anyone seeking deeper healing or curious about energy medicine from a scientific perspective. IN THIS EPISODE: [00:00] Introduction to Finding Harmony Podcast [01:00] Meet Ana Baptista: Medical Doctor and Energy Practitioner [03:00] The Impact of Unconscious Patterns on Health [05:00] Ana's Medical Background and Shift to Alternative Medicine [07:00] Growing Up with Science and Open-Minded Family [10:00] Discovering Communication Gaps in Medicine [11:00] Integrating Coaching and NLP into Medical Practice [14:00] Discovering Spinal Energetics [15:00] Experiencing Energy Work: Ana's First Session [17:00] The Science Behind Mind-Body Connection [19:00] Your Mind as Your "Claws and Teeth" [22:00] The Heart as a Second Brain [26:00] The Role of Intuition in Medicine [29:00] The Evolution of Medical Practice: Intuition and Science [32:00] The Future of Medicine: Integrating Science and Ancient Wisdom [40:00] Science Meets Energy Healing [43:00] Embracing AI as an Assistant [44:00] The Role of the Nervous System [46:00] Science and Human Potential [50:00] Spinal Energetics and Transformation [54:00] Midlife Crisis and Purpose [59:00] Healing Through Emotional Release (Real Case Studies) [1:04:00] The Interconnection of Mind and Body [1:08:00] Disease and Emotional Roots [1:17:00] Alignment: Spine, Soul, and Self [1:21:00] Where to Find Ana Baptista GUEST BIO: Ana Baptista, MD is a board-certified hematologist with over 18 years of medical experience. She has worked in emergency medicine, specialized consultations, and served as medical director for clinical trials in hematology and oncology. Trained in Portugal, Ana also holds certifications in coaching, neurolinguistic programming (NLP), and clinical hypnotherapy. After discovering spinal energetics, she now integrates energy medicine with her medical background, helping clients heal through nervous system regulation and embodied practices. Ana is passionate about bridging Western medicine with alternative healing modalities, proving that science and spirituality complement rather than contradict each other. 
CONNECT WITH ANA: Website: supportingpaths.com Instagram: @supportingpaths Location: Based in the Algarve, Portugal | Works online globally KEY TAKEAWAYS: Your mind is your evolutionary survival mechanism—like claws and teeth for humans • The heart has its own neural network and can sense magnetic fields independently • Energy work is your nervous system releasing stored tension and trauma • Chronic pain can resolve rapidly when the body feels safe to release • Autoimmune diseases may be connected to patterns of self-criticism • Midlife crisis is your purpose asking if you're aligned with your truth • Medicine is an art informed by science, not just science alone • Intuition is your nervous system processing faster than conscious thought • Disease often has emotional roots that Western medicine doesn't address • Alignment (spine, soul, life) is the key to reducing suffering • Science and energy medicine are complementary, not contradictory RESOURCES MENTIONED: "You Can Heal Your Life" by Louise Hay • Gabor Maté's work on trauma and disease • Spinal Energetics (as healing modality) • NLP (Neurolinguistic Programming) • Clinical Hypnotherapy FIND Harmony online: https://harmonyslater.com/ Harmony on IG: https://www.instagram.com/harmonyslaterofficial/ Finding Harmony Podcast on IG: https://www.instagram.com/findingharmonypodcast/ FREE Manifestation Activation: https://harmonyslater.kit.com/manifestation-activation
This week, Jaansi sits down with healthcare journalist Kristen V. Brown to explore the role of science journalism in shaping public trust, health decision-making, and collective understanding. Together, they discuss how culture and politics influence scientific research, why women's health remains underfunded, and how journalists can responsibly report on polarized topics like vaccines without deepening mistrust. Join us to learn about the essentials for communicating complex science in an increasingly fragmented media landscape! Kristen V. Brown is a healthcare journalist currently working on a book about fertility. As a Hearst Fellow at the Albany Times Union and the San Francisco Chronicle, a staffer at Bloomberg and Gizmodo, and a Webby-nominated podcaster, she has covered women's health and the cultural forces that shape medical knowledge. Trained in arts and culture journalism, Brown brings a humanities-driven lens to science reporting and has written extensively on vaccines, reproductive health, and consumer genetics. Check out Kristen's work: www.kristenvbrown.com/
Reese Jones is living every San Francisco tech guy's wet dream. Create a company, sell it to Motorola for $205 million, and meet a hot, blonde girlfriend who doesn't hold back in the bedroom. A lifestyle some would be jealous of even after Reese gets kidnapped. Three men jump out, blindfold him, and force him into a car at gunpoint. Next thing he knows, Reese is being led through seven different rooms representing the seven deadly sins. One is lust. Another is gluttony. Then, envy. Reese is bound to a chair while his girlfriend has intercourse with what is described as ‘a buffet of people.' After all seven rooms, all seven sins, Reese is reborn. Which just means he's now cloaked in white, standing on a rooftop deck while his blonde girlfriend waits for him in the distance: “Happy Birthday.” That's what you get as a present when you're worth $200 million and your girlfriend is the founder of OneTaste, a company that helps women meditate and reach orgasm. Every tech guy's wet dream, right? That is, until Reese gets wrapped up in one of the strangest potential trafficking cases, and his girlfriend, Nicole Daedone, a wellness company CEO, ends up in the same prison as none other than Ghislaine Maxwell. Full show notes available at RottenMangoPodcast.com Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
It's been two months since the federal government began what it calls “Operation Metro Surge” in Minnesota. Besides spreading fear amongst immigrants and producing many documented instances of violence and racial profiling, the surge has led many Minnesotans to jump into a remarkably large network of advocates, lawyers, constitutional observers and mutual aid providers. While these helpers have made headlines worldwide, many are getting tired. The Immigrant Defense Network has been operating beyond its capacity for weeks, and there's not yet an end in sight. The Immigrant Defense Network helped band together more than 100 organizations to assist struggling families and defend immigrants' constitutional rights. In January, the network registered an average of 2,000 volunteers per week to deliver food, give at-risk families rides, go to court hearings, and translate documents. “The scale is unimaginable,” Edwin Torres Desantiago, Immigrant Defense Network manager, said. “We have rapid response around the clock, seven days a week. We are actively responding to a case every six minutes across the state of Minnesota.” Torres Desantiago said that to many staff and volunteers, their work feels like a nonstop sprint. “A lot of us are tired, but we know that in this moment we need to keep defending and protecting our neighbors.” “We are living with the reality that this is no longer a couple-week operation like it was in other cities,” Torres Desantiago said. “We are now expecting and creating the infrastructure that this is something we have to sustain for an unforeseeable future.” Torres Desantiago said that even if there was a sudden decrease in ICE agents in the state, his organization would still work around the clock for months to help with the ripple effect the operation has had on tens of thousands of Minnesotans.
JOIN THE 7 DAY RESET - ▶️ www.therebuiltman.com/7dayreset For decades, the porn industry has sold men a lie. A lie of freedom. A lie of self-expression. A lie that promised pleasure—but delivered dependence. In this episode, Coach Frank Rich pulls the curtain back on the porn industry and exposes how it was built, how it manipulates the male brain, and why so many men feel trapped in a cycle they don't understand. This isn't about shame. This isn't about morality. This is about truth, clarity, and taking your power back. In this episode, you'll learn:
How porn evolved from taboo content into a global, data-driven industry
The psychological and neurological tactics used to keep men hooked
Why porn trains the brain to crave novelty over real connection
The hidden human cost behind the screen that's rarely talked about
Why "sexual freedom" is the most dangerous lie the industry sells
How awareness is the first step to real, lasting freedom
If you've ever wondered why you keep going back—even when you hate it—this episode will give you clarity most men never get. And once you see it, you can't unsee it.
Tonight, my Haunted Hearts, I'm honored to welcome Ryan Kralik. Ryan Kralik is an author and independent researcher exploring how information shapes the structure of reality, from physics and biology to consciousness and culture. His work draws on systems thinking, contemporary physics, and cognitive science to examine how informational processes underlie physical phenomena and give rise to meaning, perception, and continuity across domains. His writing and research have been featured in Ancient Origins, Greek Reporter, Aperture Magazine, and across a range of programs focused on cosmology, anomalies, and foundational questions about reality. His forthcoming book, It From Us – An Information-First Framework and the Purpose of Consciousness, presents a model in which matter, life, and awareness emerge from information organizing itself toward coherence. Trained in remote viewing and engaged for decades in the study of anomalous phenomena, Kralik brings a disciplined, analytical approach to topics such as UAPs, psi research, and consciousness—seeking an integrated framework that bridges empirical inquiry and human experience without reducing either. Ryan's Links: Website: https://itfromus.com/
A talk by Thanissaro Bhikkhu entitled "The Well-trained Mind"
Sam Manchulenko is a seasoned yoga instructor and spiritual guide based out of Winnipeg, Canada. She is renowned for her expertise in facilitating yoga teacher trainings and workshops that integrate yoga philosophy with practical spiritual tools. Trained under prominent spiritual figures such as Dharma Mittra, Byron Katie, and Eckhart Tolle, Sam offers unique insights into yoga and personal development. She specializes in psychic development, intuitive empowerment, and blending various philosophical teachings to help individuals achieve internal harmony and mindfulness. Visit Sam: https://www.samtheyogi.com/
Key Takeaways:
Integration of Practice: Sam highlights the impact of practicing yoga alongside spiritual guides and emphasizes the importance of embodying compassion and curiosity.
Yoga and Dance Synergy: Discover how both yoga and dance facilitate mindfulness, presence, and the subtle art of offering oneself to a greater purpose.
Philosophical Insights: Explore the power of detachment and the unconditional love that comes from accepting challenges and darker emotions.
Psychic Development: Learn about psychic development or intuitive empowerment and how it focuses on attuning to one's inner vibrations to manifest positivity.
Thanks for listening to this episode. Check out:
Aging is not something Zoltan Istvan plans to accept quietly. He wants to treat death like a technical bug, rewrite the rules of biology, and turn California into the global test bed for radical human upgrades. From cyborg implants to AI-driven longevity science, this episode explores what happens when a candidate for governor openly argues that humans should evolve beyond their biological limits and take control of how long and how well they live. Watch this episode on YouTube for the full video experience: https://www.youtube.com/@DaveAspreyBPR Host Dave Asprey sits down with Zoltan Istvan, a leading transhumanist, futurist, longevity advocate, and current candidate for Governor of California. Zoltan has spoken at parliaments and senates around the world, appeared on The Joe Rogan Experience, consulted for the US Armed Forces, and served as a correspondent for The New York Times. He has addressed the World Bank, the World Economic Forum, and the UK Parliament, and his work has influenced world leaders while shaping global conversations on AI, liberty, and human enhancement. Trained in philosophy and ethics at Columbia University and the University of Oxford, Zoltan brings rare depth to the intersection of technology, biology, and governance. Together, they explore whether aging should be classified as a disease, why regulation is slowing breakthroughs in longevity science, and how California could become ground zero for anti-aging innovation. They debate biology versus machine integration, open source technology versus centralized control, and what morphological freedom really means when enhancement technologies move faster than policy. The discussion spans mitochondria, neuroplasticity, brain optimization, stem cells, organ printing, implants, and the ethical risks of surveillance, algorithmic persuasion, and unchecked AI. This episode is essential listening for anyone serious about biohacking, hacking human performance, longevity, metabolism, functional medicine, anti-aging strategies, supplements, nootropics, ketosis, fasting, carnivore frameworks, sleep optimization, and living Smarter Not Harder in a world increasingly shaped by AI and technology, ideally with a cup of Danger Coffee in hand. You'll Learn: • Why aging may be a solvable problem rather than an unavoidable fate • How politics and regulation influence access to longevity and anti-aging therapies • The real tradeoffs between biological upgrades and machine integration • Why mitochondria, neuroplasticity, and brain optimization matter in human enhancement • How AI and surveillance technology threaten cognitive and biological autonomy • What morphological freedom means for the future of medicine and personal choice • Why open source approaches to biohacking could protect liberty and innovation • How Smarter Not Harder strategies support longevity in a rapidly evolving world Dave Asprey is a four-time New York Times bestselling author, founder of Bulletproof Coffee, and the father of biohacking. With over 1,000 interviews and 1 million monthly listeners, The Human Upgrade is the top podcast for people who want to take control of their biology, extend their longevity, and optimize every system in the body and mind. Each episode features cutting-edge insights in health, performance, neuroscience, supplements, nutrition, hacking, emotional intelligence, and conscious living. Thank you to our sponsors!
BEYOND Conference 2026 | Register now with code DAVE300 for $300 off at https://beyondconference.com/ MASA Chips | Go to https://www.masachips.com/DAVEASPREY and use code DAVEASPREY for 25% off your first order. GOT MOLD? | See what's in your air and save 10% with code DAVE10 at http://gotmold.com/shop EMR-Tek | Get 40% off EMF protection with code DAVE at https://www.emr-tek.com/DAVE New episodes of The Human Upgrade are released every Tuesday, Thursday, Friday, and Sunday (BONUS). Dave asks the questions no one else will and gives you real tools to become stronger, smarter, and more resilient. Keywords: transhumanism podcast, human cyborg future, biohacking transhumanism, longevity technology podcast, anti-aging technology, human enhancement podcast, cyborg implants future, AI human evolution, aging as a disease, radical longevity science, human performance future, brain optimization technology, mitochondria longevity science, neuroplasticity enhancement, biohacking longevity politics, California longevity policy, morphological freedom body, human augmentation debate, AI risk humanity, surveillance technology health, open source biohacking, stem cell longevity future, organ printing technology, functional medicine future, metabolism longevity science, ketosis fasting longevity, nootropics brain optimization, supplements longevity science, carnivore diet longevity, sleep optimization performance, Dave Asprey transhumanism, Zoltan Istvan podcast, futurist longevity interview, governor cyborg policy, technology immortality debate Resources: • Learn More About Zoltan's Work At: https://zoltanistvan.com/ • Get My 2026 Biohacking Trends Report: https://daveasprey.com/2026-biohacking-trends-report/ • Join My Low-Oxalate 30-Day Challenge: https://daveasprey.com/2026-low-ox-reset/ • Dave Asprey's Latest News | Go to https://daveasprey.com/ to join Inside Track today. • Danger Coffee: https://dangercoffee.com/discount/dave15 • My Daily Supplements: SuppGrade Labs (15% Off) • Favorite Blue Light Blocking Glasses: TrueDark (15% Off) • Dave Asprey's BEYOND Conference: https://beyondconference.com • Dave Asprey's New Book – Heavily Meditated: https://daveasprey.com/heavily-meditated • Upgrade Collective: https://www.ourupgradecollective.com • Upgrade Labs: https://upgradelabs.com Timestamps: 0:00 – Introduction 3:51 – What Is Transhumanism 8:15 – Biology vs Technology 12:53 – Government & Regulation 20:43 – Running for Governor 26:13 – Social Media & Kids 30:18 – Life Extension & Upgrades 38:59 – Defining Humanity 46:12 – Consciousness & Uploading 49:27 – Religion & Society 58:02 – AI Existential Risk 1:02:57 – Space & Future Enhancement See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
This one's tough. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Welcome to the Minority Mindset Show! Want more financial news? Join Market Briefs, my free daily financial newsletter: https://link2.briefs.co/gie Below are my recommended tools! Please note: Yes, these are our sponsors & advertisers. However, these are companies that I trust and use (or have used). The compensation doesn't affect my recommendations or advice. That being said, you should always do your own research & never blindly listen to a random guy on YouTube (or podcast). ---------- ➤ Invest In Stocks Passively 1) M1 Finance - Buy stocks & ETFs automatically: https://theminoritymindset.com/m1 ---------- ➤ Life Insurance 2) Policygenius - Get a free life insurance quote: https://theminoritymindset.com/policygenius ---------- ➤ Real Estate Investing Online 3) Fundrise - Invest in real estate with as little as $10! https://theminoritymindset.com/fundrise ----------
Recorded by K. A. Hays for Poem-a-Day, a series produced by the Academy of American Poets. Published on January 29, 2026. www.poets.org
The number of Immigration and Customs Enforcement agents has doubled over the past year -- driven by a massive recruitment campaign. Who the new recruits are and how they're being trained. *** Thank you for listening. Help power On Point by making a donation here: www.wbur.org/giveonpoint
Today’s Bible Verse: “When he hesitated, the men grasped his hand and the hands of his wife and of his two daughters and led them safely out of the city, for the Lord was merciful to them.” — Genesis 19:16 (NIV) Genesis 19:16 is a powerful picture of God’s mercy in motion. Even when Lot hesitated, unsure or slow to respond, the Lord’s compassion did not waver. God didn’t wait for perfect faith or flawless obedience—He stepped in and led them to safety. Want to listen without ads? Become a BibleStudyTools.com PLUS Member today: https://www.biblestudytools.com/subscribe/ MEET YOUR HOST: Chaka Heinze at https://www.lifeaudio.com/your-daily-bible-verse/ Chaka Heinze is a writer, speaker, and lover of the Bible. She is actively involved in her local church on the Prayer and Healing team and mentors young women seeking deeper relationships with God. After personally experiencing God's love and compassion following the loss of her eleven-year-old son, Landen, Chaka delights in testifying to others about God's unfathomable and transformative love that permeates even the most difficult circumstances. Chaka and her husband of twenty-six years have five children ranging from adult age to preschool. Trained as an attorney, she’s had the privilege of mediating sibling disputes for twenty-plus years. Follow her at Chakaheinze.com. This episode is sponsored by Trinity Debt Management. If you are struggling with debt, call Trinity today. Trinity's counselors have the knowledge and resources to make a difference. Our intention is to help people become debt-free, and most importantly, remain debt-free for keeps! If your debt has you down, we should talk. Call us at 1-800-793-8548 | https://trinitycredit.org TrinityCredit – Call us at 1-800-793-8548, whether we're helping people pay off their unsecured debt or offering assistance to those behind on their mortgage payments. https://trinitycredit.org Discover more Christian podcasts at lifeaudio.com and inquire about advertising opportunities at lifeaudio.com/contact-us.
In this episode of Leadership Insider, Paul reflects on one of the most overlooked realities of leadership today: leadership is emotional, whether we acknowledge it or not. Drawing on more than four decades of experience working with leaders, teams, and global organisations, he explores how many leadership problems remain hidden in plain sight because they are misdiagnosed, avoided, or oversimplified. Key themes explored in this episode:
Why leadership is shaped more by emotional climate than strategy or structure
How leaders often mistake control, compliance, or engagement metrics for real leadership
The difference between managing obedience and cultivating ownership
Why people disengage when they feel unseen, unheard, or unsafe
How emotional blind spots quietly erode trust, culture, and performance
Why the most damaging leadership issues are often the most obvious ones
How misdiagnosing problems leads to repeated fixes that never work
Paul also shares real-world observations from working inside organisations — including luxury brands — showing how leaders can become disconnected from the lived experience of their people, and how this disconnect affects morale, loyalty, and results. Key takeaway: Leadership improves when leaders stop managing symptoms and start paying attention to the emotional reality of the people they lead. To continue the conversation, you can stay connected with Paul through the Leadership Insider podcast, where he shares further reflections on leadership, culture, and the human side of work.
The Steve Gruber Show | Funded, Trained, Coordinated: The Truth About ICE ‘Protests' --- 00:00 - Hour 1 Monologue 18:57 – Jonathan Feldstein, Founder and President of the Genesis 123 Foundation. Feldstein addresses claims that Christian Zionism is a “harmful and damaging” ideology. He explains what Christian Zionism actually is and why he believes it plays a vital role in faith, history, and geopolitics. 27:36 – Natalie Dominguez, Title Theft Education Specialist for Home Title Lock. Dominguez shares real-life cases where families lost their homes due to title theft and explains why protecting your home is essential. Visit HomeTitleLock.com and use promo code GRUBER for a free title history report and a free 14-day trial of Million Dollar TripleLock Protection. 37:35 - Hour 2 Monologue 46:21 – Derringer Dick, Strategic Research Associate at Becket. Dick breaks down a new survey showing all-time high public support for religious freedom. He explains what's driving the trend and why it matters in today's legal and cultural landscape. 56:08 – Steve Bucci, Visiting Fellow at The Heritage Foundation. Bucci explains why President Trump's strikes in Nigeria are strategically significant. He discusses terrorism, regional stability, and U.S. national security interests. 1:04:47 – Bobby Khan, congressional candidate for Nevada's 1st Congressional District. Khan shares his remarkable personal story, including how he once appeared on the FBI's Most Wanted list. He explains how that past led him to where he is today and why he's now running for Congress. 1:14:37 - Hour 3 Monologue 1:23:27 – Rep. Joe Aragona, representing Michigan's 60th District in Clinton Township. Aragona exposes the Rx Kids program for allegedly funneling millions in taxpayer dollars to Michigan State University and a New York nonprofit. He discusses accountability and misuse of public funds. 1:33:08 – Maya MacGuineas, President of the Committee for a Responsible Federal Budget. MacGuineas explains what a fiscal crisis would actually look like in the United States. She outlines warning signs, economic consequences, and what policymakers should be doing now. 1:41:49 – Ivey Gruber, President of the Michigan Talk Network. Gruber breaks down the latest shooting in Minneapolis and discusses what may have happened. The conversation focuses on how these tragedies are often avoidable, the dangers of social media-driven narratives, and the importance of facts, compliance, and survival. --- Visit Steve's website: https://stevegruber.com TikTok: https://www.tiktok.com/@stevegrubershow Truth: https://truthsocial.com/@stevegrubershow Gettr: https://gettr.com/user/stevegruber Facebook: https://www.facebook.com/stevegrubershow Instagram: https://www.instagram.com/stevegrubershow/ Twitter: https://twitter.com/Stevegrubershow Rumble: https://rumble.com/user/TheSteveGruberShow
Superpowers for Good should not be considered investment advice. Seek counsel before making investment decisions. When you purchase an item, launch a campaign or create an investment account after clicking a link here, we may earn a fee. Engage to support our work. Watch the show on television by downloading the e360tv channel app to your Roku, LG or Amazon Fire TV. You can also see it on YouTube. Devin: What is your superpower? Richard: The superpower is to see the truth that we're all made in the image of God…underneath all of the apparent polarization. The world feels increasingly divided, yet Richard Flyer believes we can create a more united, symbiotic culture by shifting our perspective. During today's episode, Richard explained his compelling vision for a community built on intentional mutual benefit—a concept that resonates deeply with me. Richard's new book, Birthing the Symbiotic Age, is the culmination of over two decades of work, blending personal experience, community organizing, and a belief in the interconnectedness of humanity and nature. He challenges the idea that we are separate, saying, “We're actually all connected…within our families, neighborhoods, local communities, nations, and worldwide.” This intentional mutual benefit, as Richard describes it, is a culture where every action, thought, and decision considers its impact on others. It's about making connection a core value, from small personal interactions to global systems. Richard explained, “Symbiotic culture…is a culture in which intentional mutual benefit between human beings and with nature becomes the norm at all scales.” He draws from practical experience, sharing stories of community transformation. Richard recounted his involvement in initiatives like the Nevada Micro-Enterprise Initiative, which provided low-income entrepreneurs with seed funding, mentorship, and technical assistance. These efforts exemplify his belief that mutual benefit can underpin economic and social systems, creating a “virtuous economy.” This vision aligns beautifully with the principles of impact crowdfunding, where investors and entrepreneurs unite to create positive change. Richard's work shows how embedding intentional mutual benefit into our economy has the power to transform not only individual lives but entire communities. Richard's book, Birthing the Symbiotic Age, offers a roadmap for rebuilding our culture with love and connection at its heart. As he said, “When we engage the world, we are coming from that deeper connected perspective.” For those interested in this vision, Richard's book is available at richardflyer.com. By embracing his ideas, we can take steps toward realizing this symbiotic age together.
tl;dr:
Richard Flyer shares a 20-year journey to create a symbiotic culture of intentional mutual benefit.
He explains how his book, Birthing the Symbiotic Age, challenges the myth of separation in society.
Richard highlights community-building efforts, including crime reduction and micro-financing initiatives.
He describes his superpower: recognizing the intrinsic divinity or goodness in every individual.
Richard provides actionable advice for fostering connection and building a culture of mutual benefit.
How to Develop Recognizing the Divinity in Others As a Superpower
Richard's superpower is the ability to see the divinity—or intrinsic goodness—in everyone.
He explained, “The superpower is to see the truth that we're all made in the image of God…underneath all of the apparent polarization.” This perspective allows him to bridge divides and unite communities, focusing on the shared humanity that connects us all. Richard emphasized that this principle applies universally, regardless of one's spiritual or secular beliefs, making it a powerful tool for fostering connection and collaboration. Richard shared a transformative story of overcoming his personal biases to unite his community. In Reno, Nevada, he recognized his antipathy toward religious organizations was limiting his ability to include them in community-building efforts. To address this, he spent a year visiting various religious and spiritual groups, from Christian churches to Buddhist sanghas. This experience helped him see individuals beyond their labels, fostering greater understanding and collaboration. This shift enabled him to unite diverse groups to address shared challenges.
Tips for Developing the Superpower:
Attend events hosted by organizations or people you may disagree with to foster understanding.
Practice small, intentional acts of kindness, such as holding the door open for others.
Consciously remind yourself of the shared humanity in everyone, even those with opposing views.
Reflect on personal biases and take steps to overcome them for greater connection.
By following Richard's example and advice, you can make recognizing the divinity in others a skill. With practice and effort, you could make it a superpower that enables you to do more good in the world. Remember, however, that research into success suggests that building on your own superpowers is more important than creating new ones or overcoming weaknesses. You do you!
Get Your Copy!
Guest Profile
Richard Flyer (he/him): Symbiotic Culture - more a framework at this point, not an organization
About Symbiotic Culture: Symbiotic Culture is a civic and cultural framework focused on rebuilding trust, belonging, and cooperation at the local level in a time of social fragmentation. It integrates insights from community development, economics, spirituality and faith traditions, and living systems to help people move beyond polarization toward shared purpose and practical collaboration. Rather than advancing ideology or top-down solutions, Symbiotic Culture emphasizes connecting the good already present in local communities—linking people, initiatives, and institutions so they can work together more effectively through shared values and virtues such as trust, mutual responsibility, and care. The work holds that lasting social renewal is both practical and spiritual, beginning not with systems alone but with people learning how to live, work, and solve problems together in meaningful ways.
Website: richardflyer.com
Biographical Information: Richard Flyer is an author, community-builder, and faith-rooted cultural strategist whose life's work bridges science, spirituality, and civic renewal. Trained as a biologist, he studied pilot whale and dolphin communication at UC Santa Cruz and San Diego State before earning an M.S. in Biology. His grounding in living systems science later became the foundation for Symbiotic Culture—a framework that integrates spiritual insight with practical tools for regenerative community life. Richard's career spans health, education, and grassroots leadership.
He pioneered hyperbaric oxygen therapy programs in Nevada hospitals, taught in community colleges and detention facilities, and led nonprofits including the San Diego Food Bank, Neighbors United, and the Nevada Microenterprise Initiative. Internationally, he served with Sri Lanka's Sarvodaya Shramadana movement, supporting a national network of over 5,000 communities. His work draws inspiration from Jesus and the early church, Gandhi's village republics, and Václav Benda's idea of the Parallel Polis. For Richard, following Jesus is not about dogma, but about daily practice—learning to embody love, reconciliation, hospitality, and neighborliness in a divided world. He sees in Jesus not only the center of his faith, but a bridge across traditions, calling people into deeper connection and shared responsibility. Today, through Symbiotic Culture, Richard mentors leaders across faith, civic, and cultural spheres. In Birthing the Symbiotic Age, he offers a vision for a Global Commonwealth of 50,000 empowered communities—a parallel society rooted in love, justice, and mutual flourishing. He lives on O‘ahu, Hawaii with his wife Marta, drawing renewal from the islands, time with family, and the simple joy of Connecting the Good wherever he goes.
LinkedIn Profile: linkedin.com/in/richard-flyer-6820727
Personal Twitter Handle: @Richard_Flyer
Personal Facebook Profile: facebook.com/richard.flyer
Instagram Handle: @richard.flyer
Support Our Sponsors
Our generous sponsors make our work possible, serving impact investors, social entrepreneurs, community builders and diverse founders. Today's advertisers include Crowdfunding Made Simple and Make Money with Impact Crowdfunding. Learn more about advertising with us here.
Max-Impact Members (We're grateful for every one of these community champions who make this work possible.)
Brian Christie, Brainsy | Cameron Neil, Lend For Good | Carol Fineagan, Independent Consultant | Hiten Sonpal, RISE Robotics | John Berlet, CORE Tax Deeds, LLC. | Justin Starbird, The Aebli Group | Lory Moore, Lory Moore Law | Mark Grimes, Networked Enterprise Development | Matthew Mead, Hempitecture | Michael Pratt, Qnetic | Mike Green, Envirosult | Dr. Nicole Paulk, Siren Biotechnology | Paul Lovejoy, Stakeholder Enterprise | Pearl Wright, Global Changemaker | Scott Thorpe, Philanthropist | Sharon Samjitsingh, Health Care Originals | Add Your Name Here
Upcoming SuperCrowd Event Calendar
If a location is not noted, the events below are virtual.
SuperCrowd Impact Member Networking Session: Impact (and, of course, Max-Impact) Members of the SuperCrowd are invited to a private networking session on January 27th at 1:30 PM ET/10:30 AM PT. Mark your calendar. We'll send private emails to Impact Members with registration details.
Community Event Calendar
Successful Funding with Karl Dakin, Tuesdays at 10:00 AM ET - Click on Events.
Join UGLY TALK: Women Tech Founders in San Francisco on January 29, 2026, an energizing in-person gathering of 100 women founders focused on funding strategies and discovering SuperCrowd as a powerful alternative for raising capital.
If you would like to submit an event for us to share with the 10,000+ changemakers, investors and entrepreneurs who are members of the SuperCrowd, click here.
Manage the volume of emails you receive from us by clicking here.
We use AI to help us write compelling recaps of each episode. Get full access to Superpowers for Good at www.superpowers4good.com/subscribe
Did you know that gaslighting doesn't only happen in romantic relationships, but can show up in families, friendships, workplaces, spiritual spaces, and even inside your own mind long after the other person is gone? In this episode of Infinite Life, Infinite Wisdom, Susan Grau talks honestly about gaslighting, not as a buzzword, but as something that slowly teaches you to doubt yourself. Your memory. Your feelings. Your intuition. Drawing from her own experience and the work she's done with clients, Susan explains how gaslighting can leave you disconnected from your body, unsure of your truth, and constantly questioning whether you were right or wrong. She shares why the impact of gaslighting often continues long after the relationship or situation has ended, and why that ongoing self-doubt isn't a flaw. It's a survival response. Susan also talks about what healing really looks like, and it's not dramatic or confrontational. It's quiet and reparative. Learning to believe in yourself again. Reducing contact with invalidating energy. Setting clearer boundaries. Letting yourself rest, pause, and be silent without guilt. This episode is a gentle reminder that if you were gaslit, you were not weak. You were trusting, open, and human. Healing begins when you stop looking to others for closure and start choosing yourself, even when it feels uncomfortable. In This Episode: [00:00] Introduction [01:30] Gaslighting beyond romantic relationships [02:46] Defining gaslighting and its effects [04:02] Common phrases and accountability avoidance [05:22] Impact on intuition and self-trust [06:44] Personal experience and disconnection [08:57] Overriding intuition and survival response [10:10] Self-abandonment and explaining pain [11:12] Avoiding invalidating people and relearning self-care [12:18] Reparative self-care and trusting yourself [13:33] Physical responses and recognizing gaslighting [14:37] Slowing down and silence without guilt [15:49] Setting boundaries and reducing exposure [17:46] Distance as protection, not punishment [18:42] Self-love as loyalty and choosing peace [19:47] Why people stay and the role of hope [22:02] Healing feels quieter and rooted [23:13] Freedom from needing admission [24:34] The loop: replaying conversations [25:33] Closure comes from within [26:40] Breaking the loop and nervous system repair [27:47] Longer gaps and present-moment healing [28:53] Self-love means releasing the need for clarity [29:30] Conclusion Notable Quotes: [03:26] “You stop asking, ‘Is this okay?' And start asking, ‘Am I okay for feeling this?'” [04:58] “Most gaslighting comes from people who cannot tolerate accountability.” [08:45] “Gaslighting separates you from your inner knowing.” [11:22] “You weren't too much. You were too aware for someone who needed control.” [12:09] “Self-care isn't a bubble bath. It's relearning how to listen to yourself without apology.” [13:24] “Your experience is valid because you lived it.” [19:15] “I choose peace over being right.” [21:18] “Staying doesn't mean you were weak. It means you were human.” [21:39] “Leaving is not a failure. It's wisdom arriving.” [25:53] “The same person who distorted your reality cannot be the one who restores it.” [27:47] “Some chapters do not end with understanding. They end with self-respect.” [28:42] “My peace does not require permission.” Susan Grau: Susan Grau is an internationally celebrated intuitive life coach, a key opinion leader, author, medium and speaker, who discovered her ability to communicate with the spirit world after a near-death experience at age four.
Trained by Dr. Raymond Moody, James Van Praagh, and Lisa Williams, Susan is a Reiki Master, hypnotherapist, and grief therapist. Her new book, "Infinite Life, Infinite Lessons," published by Hay House, explores healing from grief and the afterlife. With media coverage in GOOP, Elle, and The Hollywood Reporter, Susan's expertise extends to podcasts, radio shows, and documentaries. She offers private mediumship readings, life path guidance, reiki sessions, and hypnotherapy, aiding individuals in healing and finding spiritual guidance.
Resources and Links: Infinite Life, Infinite Wisdom Podcast
Susan Grau: Website | Order | Facebook | Instagram | YouTube | TikTok
Mentioned: Infinite Life, Infinite Lessons: Wisdom from the Spirit World on Living, Dying, and the In-Between by Susan Grau
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
In this episode, we dive deep into the critical topic of self-deception and its profound impact on leadership and personal effectiveness. Mitch shares powerful insights on how self-deception can undermine our relationships and professional success, often without us even realizing it. He explains the concept of self-betrayal and how it leads to a distorted view of ourselves and others, creating unnecessary conflicts and reducing our influence as leaders. Mitch shares valuable advice on how to rebuild trust in relationships damaged by self-deception and how to keep it from happening again. Mitch is the co-author of Arbinger's latest bestseller, The Outward Mindset. He writes frequently on the practical effects of mindset at the individual and organizational levels as well as the role of leadership in transforming organizational culture and results. He is an expert on mindset and culture change, leadership, strategy, performance management, organizational turnaround, and conflict resolution. Mitch is a sought-after speaker to organizations across a range of industries, bringing his practical experience to bear for leaders of corporations, governments, and organizations across the globe. Specific clients include NASA, Citrix, Aflac, the U.S. Army and Air Force, the Treasury Executive Institute, and Intermountain Healthcare. Mitch carries his first-hand perspective as a proven leader into his speeches and facilitation, dynamically bringing Arbinger's concepts and tools to life through his powerful stories and hands-on experience. His audiences leave inspired to improve and equipped with a practical roadmap to effect immediate change. In his role as managing partner, Mitch directs the development of Arbinger's intellectual property, training and consulting programs, and highly customized large-scale organizational change initiatives. He has been instrumental in Arbinger's rapid growth, including its expanding international presence in nearly 30 countries. Mitch received his B.A. in philosophy and is a licensed nursing administrator. Trained in fine art at the Art Students League and the National Academy, he spends much of his free time painting. His work hangs in organizations nationwide. Visit Arbinger Institute here: https://arbinger.com/ Here are some free gifts for you: Overall Approach Used in Well-Managed Strategy Studies free download: www.firmsconsulting.com/OverallApproach McKinsey & BCG winning resume free download: www.firmsconsulting.com/resumepdf Enjoying this episode? Get access to sample advanced training episodes here: www.firmsconsulting.com/promo
Send Us Your Prayer Requests. Thank you for listening! Your support of Joni and Friends helps make this show possible. Joni and Friends envisions a world where every person with a disability finds hope, dignity, and their place in the body of Christ. Become part of the global movement today at www.joniandfriends.org. Find more encouragement on Instagram, TikTok, Facebook, and YouTube.
In today's episode of You Can Overcome Anything! Podcast Show, cesarRespino.com brings you a special guest from beautiful Spain. Laetitia is an internationally awarded model, movement artist, and published muse whose work bridges fine art, photography, and dance. Trained at the prestigious Paris Opera Ballet School and a former performer with Cirque du Soleil, she brings grace, precision, and theatrical depth to every frame. She has traveled to over 87 countries, published five fine art photo books, appeared on the covers of international magazines, and been immortalized in bronze sculptures such as Little Sister by Basil Watson. With multiple international accolades and a global following of over 100,000, she has built a six-figure creative career rooted in authenticity, independence, and timeless storytelling. Laetitia's message to you is: I want to motivate and inspire them to follow their heart and dreams :) ...everything is possible. To connect with Laetitia Bouffard-Roupe go to:
https://laetitiamodel.com/
https://www.instagram.com/laetitia_channel_model/
https://www.patreon.com/laetitiamodel
https://www.youtube.com/@laetitiamodel
To Connect with CesarRespino go to:
Hidden Killers With Tony Brueski | True Crime News & Commentary
The prosecution has surveillance footage. A ballistics match. An alleged suppressor. But if you're defending Michael McKee, the holes in this case are where you live. How did McKee allegedly enter the Tepe home with no forced entry? Prosecutors haven't explained it publicly. The aggravated burglary charge suggests they have a theory — but until they disclose it, that's a gap the defense can exploit. There's no disclosed motive. McKee and Monique divorced years ago. Police confirmed there were no prior reports from the Tepe address about McKee — no 911 calls, no restraining orders, no documented threats. No ongoing disputes. So why would a surgeon with everything to lose allegedly drive to Ohio and kill two people? Defense attorney Bob Motta analyzes the defense's options. McKee is a vascular surgeon. Intelligent. Educated. Trained in precision. The prosecution's theory requires him to allegedly commit premeditated murder, use a suppressor — and then keep the murder weapon in his own apartment. How does the defense reconcile that with the profile of a careful, calculating person? McKee "disappeared" in the months before the murders. Process servers couldn't find him. A colleague said he just vanished. The prosecution might call that consciousness of guilt. The defense might call it a man moving between jobs. Both Spencer and Monique were shot multiple times. Does the manner of the killings help or hurt the defense? Could they argue this looks more like rage than premeditation — even with the suppressor allegation? Motta breaks down the strategies available and what it would take for McKee to walk. #MichaelMcKee #MoniqueTepe #SpencerTepe #TepeMurders #BobMotta #HiddenKillers #DefenseStrategy #NoForcedEntry #ReasonableDoubt #CriminalDefense
Join Our SubStack For AD-FREE ADVANCE EPISODES & EXTRAS!: https://hiddenkillers.substack.com/
Want to comment and watch this podcast as a video? Check out our YouTube Channel. https://www.youtube.com/@hiddenkillerspod
Instagram: https://www.instagram.com/hiddenkillerspod/
Facebook: https://www.facebook.com/hiddenkillerspod/
Tik-Tok: https://www.tiktok.com/@hiddenkillerspod
X Twitter: https://x.com/tonybpod
Listen Ad-Free On Apple Podcasts Here: https://podcasts.apple.com/us/podcast/true-crime-today-premium-plus-ad-free-advance-episode/id1705422872
This publication contains commentary and opinion based on publicly available information. All individuals are presumed innocent until proven guilty in a court of law. Nothing published here should be taken as a statement of fact, health or legal advice.
"The Gentleman Barbarian" Bruss Hamilton, a former U.S. Marine, World Strength Games u265 champion, and 2025 America's Strongest Veteran who traded the strongman field for the pro wrestling ring. Trained at the Black & Brave Wrestling Academy under Seth Rollins and Marek Brave, Bruss has become a staple of the Midwest independent scene (SCW Pro, AAW, Iron Spirit Pro, ZOWA Live, Wrestling Revolver and more) and is now heading into a WWE tryout. Off the platform he's a husband, father of six, and online coach, using his journey from undersized drama kid to world-level strength athlete to help others chase their own version of "Gentleman Barbarian" strength. Follow Bruss: Instagram: https://www.instagram.com/brusshamiltonpro X (Twitter): https://x.com/BrussHamiltonGB Patreon: https://www.patreon.com/brusshamilton (Other socials under: "Bruss Hamilton") Become an elitefts channel member for early access to Dave Tate's Table Talk podcast and other perks. @eliteftsofficial Support Dave Tate's Table Talk: FULL Crew Access - https://www.elitefts.com/join-the-crew Limited Edition Apparel https://www.elitefts.com/shop/apparel/limited-edition.html Programs & More - https://www.elitefts.com/shop/dave-tate-s-table-talk-crew.html TYAO Application - https://www.elitefts.com/dave-tate-s-tyao-application Best-selling elitefts Products: Pro Resistance Training Bands: https://www.elitefts.com/shop/bands.html Specialty Barbells: https://www.elitefts.com/shop/bars-weights/specialty-bars.html Wraps, Straps, Sleeves: https://www.elitefts.com/shop/power-gear.html Sponsors: Get an extra 10% OFF at elitefts (CODE: TABLE TALK): https://www.elitefts.com/ Get 10% OFF Your Next Marek Health Labs (CODE: TABLETALK): https://marekhealth.com/tabletalk Get a free 8-count Sample Pack of LMNT's most popular drink mix flavors: http://www.drinklmnt.com/tabletalk Support Massenomics! https://www.massenomics.com Save 20% on monthly, yearly, or lifetime - MASS Research Review (CODE ELITEFTS20): https://massresearchreview.com RP Hypertrophy App (CODE: TABLE TALK) https://rpstrength.com/pages/hypertrophy-app
In this episode of Skin Anarchy, Dr. Ekta Yadav welcomes Carolina Reis Oliveira and Alessandra Zonari, co-founders of OneSkin, for a rigorous, eye-opening conversation that reframes skincare as a true longevity intervention. Part of the Lessons in Longevity series, this episode asks a bold question: what if skin aging isn't cosmetic at all—but cellular, systemic, and deeply biological? OneSkin began not as a brand, but as a lab-based mission. Trained in stem cell biology, tissue engineering, and genomics, Carolina and Alessandra spent years growing real human skin in the lab to test existing products. What they found challenged the industry: many so-called “anti-aging” formulas increased inflammation, cellular stress, and long-term damage. Meanwhile, longevity science was accelerating—yet skin, the body's largest organ, was being left out of the conversation. At the center of the episode is cellular senescence—a state where damaged cells stop dividing and begin secreting inflammatory signals that degrade surrounding tissue. In skin, this process weakens the barrier, disrupts collagen, and accelerates visible aging. Rather than masking symptoms, OneSkin set out to target this root cause. After screening hundreds of compounds, they developed OS-01, a proprietary peptide shown in lab models to significantly reduce senescent cell burden while increasing collagen—without irritation. The conversation also expands beyond the face. OneSkin's decision to focus on body skin revealed something unexpected: improving the skin barrier may reduce systemic inflammation. Clinical data discussed in the episode suggests that healthier skin doesn't just look better—it may influence whole-body aging. This episode is a must-listen for anyone curious about where skincare, biotech, and longevity science truly intersect. Listen to the full episode of Skin Anarchy to hear how OneSkin is redefining skin as a living organ—and why the future of longevity may start at the surface.
SHOP ONESKIN
Don't forget to subscribe to Skin Anarchy on Apple Podcasts, Spotify, or your preferred platform.
Reach out to us through email with any questions.
Sign up for our newsletter!
Shop all our episodes and products mentioned through our ShopMy Shelf!
Support the show