Podcasts about Sanjay

  • 1,495PODCASTS
  • 3,071EPISODES
  • 42mAVG DURATION
  • 1DAILY NEW EPISODE
  • Feb 13, 2026LATEST

POPULARITY

20192020202120222023202420252026

Categories



Best podcasts about Sanjay

Show all podcasts related to sanjay

Latest podcast episodes about Sanjay

Coronavirus: Fact vs Fiction
The Science Behind a Broken Heart

Coronavirus: Fact vs Fiction

Play Episode Listen Later Feb 13, 2026 28:03


Love can be one of life's greatest joys and heartbreak one of its deepest pains. Sanjay talks with psychiatrist and neuroscientist Yoram Yovell about how heartbreak affects the body, why emotional pain can feel physical, and what actually helps people heal.  Producer: Kyra Dahring  Medical Writer: Andrea Kane  Showrunner: Amanda Sealy  Senior Producer: Dan Bloom  Technical Director: Dan Dzula  Executive Producer: Steve Lickteig  Learn more about your ad choices. Visit podcastchoices.com/adchoices

KSHMR - Dharma Radio
DHARMA RADIO #054

KSHMR - Dharma Radio

Play Episode Listen Later Feb 13, 2026 60:50


KSHMR drops the brand new Trap Version of "Bass Down Low" and features music from Gordo & Reinier Zonneveld, OTIOT & BEMET, ALAN SHIRAHAMA & Komb, Zafrir and many more on #DharmaRadio! OTIOT, BEMET - Sinai 00:39Bedouin, Hiya - Salaam 06:08Alvek - Rouse 11:59ASHER SWISSA, T-Puse - Halfa 16:10Zafrir - Ariana 21:00Vion Konger x Skytech - Zoom 23:23Gordo & Reinier Zonneveld - Loco Loco 27:34David Guetta vs Benny Benassi - Satisfaction (with KASIA, Moonphazes, Majewski) 31:16Alesso & Pendulum - FADE 35:42KSHMR & Sam Feldt - Pretender 39:35Panjabi MC, SANJAY, Glory Bawa - Bawa Boli 43:55Argy, Omnya - Aria (Omiki Remix) 47:40Lilly Palmer, Space 92 - Vicious Chords 51:29ALAN SHIRAHAMA & Komb - Hellhole 54:09KSHMR & MEMBA - Bass Down Low (feat. DEV) [Trap Version] 57:15

The Pritika Loonia Podcast
Serious Anxiety Symptoms You Shouldn't Avoid | Dr. Sanjay Garg | Sage Up With Pritika Ep- 31 |

The Pritika Loonia Podcast

Play Episode Listen Later Feb 13, 2026 75:14


From everyday overthinking and constant restlessness to clinical anxiety that quietly affects work, relationships, sleep, and self-worth, we break down what anxiety really is and what it is not. Dr. Garg explains why anxiety is rising so sharply today, how our mind and body stay stuck in survival mode, and the subtle signs people often ignore for years.We talk about panic attacks, health anxiety, social anxiety, overthinking loops, medication myths, therapy hesitations, and when anxiety becomes a medical concern rather than just “stress.” Most importantly, this episode focuses on clarity, not fear, how to understand your anxiety, respond to it correctly, and seek help without shame.If anxiety has ever made you feel confused, weak, or out of control, this conversation will help you see it with more compassion, logic, and hope.Dr Sanjay Garg Senior Consultant Psychiatrist Fortis Hospital, Kolkata Phone: 033 6628 4444Connect With Pritika -Podcast Related Emails - connect@pritika.coInstagram- https://www.instagram.com/pritika.looniaListen to the full podcast here - https://www.youtube.com/@PritikaLooniaOfficial Facebook - https://www.facebook.com/captainpritika/Learn From Me - www.pritika.co Listen to my podcast on - Jio saavn - https://www.jiosaavn.com/shows/sage-up-with-pritika-loonia/2/ZukCx7qhBVQ_ Spotify- https://open.spotify.com/show/7ErewAP263SgLXOUE8V0SI?si=f0c13ec52bb74062 Apple Podcast- https://podcasts.apple.com/in/podcast/sage-up-with-pritika-loonia/id151762994500:00:00-00:02:20 - Trailer00:02:21-00:08:09 - Ghabrahat anxiety hai? Symptoms kya hai?00:08:10-00:16:43- These people are more prone to "Anxiety"00:16:44-00:18:30- Special advice for WOMEN 00:18:31-00:22:32- Mujhe Anxiety hai to kya karu?00:22:33-00:32:26- These NUTRIENT deficiency could lead to Anxiety00:32:27-00:41:40- What medicine does during anxiety?00:41:41-00:47:26- Can I have STRESS even if I love my work?00:47:27-00:55:37- Psychiatrist ke paas jaau ya psychologist?00:55:38-01:02:02- What is social media doing to our mental health?01:02:03-01:15:04 Teenage with non supportive parents, kya kare?

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

From rewriting Google's search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions from CPUs and sharded indices to multimodal models that reason across text, video, and code.Jeff joins us to unpack what it really means to “own the Pareto frontier,” why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules) not FLOPs is becoming the true bottleneck, what it was like leading the charge to unify all of Google's AI teams, and why the next leap won't come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.We discuss:* Jeff's early neural net thesis in 1990: parallel training before it was cool, why he believed scaling would win decades early, and the “bigger model, more data, better results” mantra that held for 15 years* The evolution of Google Search: sharding, moving the entire index into memory in 2001, softening query semantics pre-LLMs, and why retrieval pipelines already resemble modern LLM systems* Pareto frontier strategy: why you need both frontier “Pro” models and low-latency “Flash” models, and how distillation lets smaller models surpass prior generations* Distillation deep dive: ensembles → compression → logits as soft supervision, and why you need the biggest model to make the smallest one good* Latency as a first-class objective: why 10–50x lower latency changes UX entirely, and how future reasoning workloads will demand 10,000 tokens/sec* Energy-based thinking: picojoules per bit, why moving data costs 1000x more than a multiply, batching through the lens of energy, and speculative decoding as amortization* TPU co-design: predicting ML workloads 2–6 years out, speculative hardware features, precision reduction, sparsity, and the constant feedback loop between model architecture and silicon* Sparse models and “outrageously large” networks: trillions of parameters with 1–5% activation, and why sparsity was always the right abstraction* Unified vs. specialized models: abandoning symbolic systems, why general multimodal models tend to dominate vertical silos, and when vertical fine-tuning still makes sense* Long context and the illusion of scale: beyond needle-in-a-haystack benchmarks toward systems that narrow trillions of tokens to 117 relevant documents* Personalized AI: attending to your emails, photos, and documents (with permission), and why retrieval + reasoning will unlock deeply personal assistants* Coding agents: 50 AI interns, crisp specifications as a new core skill, and how ultra-low latency will reshape human–agent collaboration* Why ideas still matter: transformers, sparsity, RL, hardware, systems — scaling wasn't blind; the pieces had to multiply togetherShow Notes:* Gemma 3 Paper* Gemma 3* Gemini 2.5 Report* Jeff Dean's “Software Engineering Advice fromBuilding Large-Scale Distributed Systems” Presentation (with Back of the Envelope Calculations)* Latency Numbers Every Programmer Should Know by Jeff Dean* The Jeff Dean Facts* Jeff Dean Google Bio* Jeff Dean on “Important AI Trends” @Stanford AI Club* Jeff Dean & Noam Shazeer — 25 years at Google (Dwarkesh)—Jeff Dean* LinkedIn: https://www.linkedin.com/in/jeff-dean-8b212555* X: https://x.com/jeffdeanGoogle* https://google.com* https://deepmind.googleFull Video EpisodeTimestamps00:00:04 — Introduction: Alessio & Swyx welcome Jeff Dean, chief AI scientist at Google, to the Latent Space podcast00:00:30 — Owning the Pareto Frontier & balancing frontier vs low-latency models00:01:31 — Frontier models vs Flash models + role of distillation00:03:52 — History of distillation and its original motivation00:05:09 — Distillation's role in modern model scaling00:07:02 — Model hierarchy (Flash, Pro, Ultra) and distillation sources00:07:46 — Flash model economics & wide deployment00:08:10 — Latency importance for complex tasks00:09:19 — Saturation of some tasks and future frontier tasks00:11:26 — On benchmarks, public vs internal00:12:53 — Example long-context benchmarks & limitations00:15:01 — Long-context goals: attending to trillions of tokens00:16:26 — Realistic use cases beyond pure language00:18:04 — Multimodal reasoning and non-text modalities00:19:05 — Importance of vision & motion modalities00:20:11 — Video understanding example (extracting structured info)00:20:47 — Search ranking analogy for LLM retrieval00:23:08 — LLM representations vs keyword search00:24:06 — Early Google search evolution & in-memory index00:26:47 — Design principles for scalable systems00:28:55 — Real-time index updates & recrawl strategies00:30:06 — Classic “Latency numbers every programmer should know”00:32:09 — Cost of memory vs compute and energy emphasis00:34:33 — TPUs & hardware trade-offs for serving models00:35:57 — TPU design decisions & co-design with ML00:38:06 — Adapting model architecture to hardware00:39:50 — Alternatives: energy-based models, speculative decoding00:42:21 — Open research directions: complex workflows, RL00:44:56 — Non-verifiable RL domains & model evaluation00:46:13 — Transition away from symbolic systems toward unified LLMs00:47:59 — Unified models vs specialized ones00:50:38 — Knowledge vs reasoning & retrieval + reasoning00:52:24 — Vertical model specialization & modules00:55:21 — Token count considerations for vertical domains00:56:09 — Low resource languages & contextual learning00:59:22 — Origins: Dean's early neural network work01:10:07 — AI for coding & human–model interaction styles01:15:52 — Importance of crisp specification for coding agents01:19:23 — Prediction: personalized models & state retrieval01:22:36 — Token-per-second targets (10k+) and reasoning throughput01:23:20 — Episode conclusion and thanksTranscriptAlessio Fanelli [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by Swyx, editor of Latent Space. Shawn Wang [00:00:11]: Hello, hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome. Thanks for having me. It's a bit surreal to have you in the studio. I've watched so many of your talks, and obviously your career has been super legendary. So, I mean, congrats. I think the first thing must be said, congrats on owning the Pareto Frontier.Jeff Dean [00:00:30]: Thank you, thank you. Pareto Frontiers are good. It's good to be out there.Shawn Wang [00:00:34]: Yeah, I mean, I think it's a combination of both. You have to own the Pareto Frontier. You have to have like frontier capability, but also efficiency, and then offer that range of models that people like to use. And, you know, some part of this was started because of your hardware work. Some part of that is your model work, and I'm sure there's lots of secret sauce that you guys have worked on cumulatively. But, like, it's really impressive to see it all come together in, like, this slittily advanced.Jeff Dean [00:01:04]: Yeah, yeah. I mean, I think, as you say, it's not just one thing. It's like a whole bunch of things up and down the stack. And, you know, all of those really combine to help make UNOS able to make highly capable large models, as well as, you know, software techniques to get those large model capabilities into much smaller, lighter weight models that are, you know, much more cost effective and lower latency, but still, you know, quite capable for their size. Yeah.Alessio Fanelli [00:01:31]: How much pressure do you have on, like, having the lower bound of the Pareto Frontier, too? I think, like, the new labs are always trying to push the top performance frontier because they need to raise more money and all of that. And you guys have billions of users. And I think initially when you worked on the CPU, you were thinking about, you know, if everybody that used Google, we use the voice model for, like, three minutes a day, they were like, you need to double your CPU number. Like, what's that discussion today at Google? Like, how do you prioritize frontier versus, like, we have to do this? How do we actually need to deploy it if we build it?Jeff Dean [00:02:03]: Yeah, I mean, I think we always want to have models that are at the frontier or pushing the frontier because I think that's where you see what capabilities now exist that didn't exist at the sort of slightly less capable last year's version or last six months ago version. At the same time, you know, we know those are going to be really useful for a bunch of use cases, but they're going to be a bit slower and a bit more expensive than people might like for a bunch of other broader models. So I think what we want to do is always have kind of a highly capable sort of affordable model that enables a whole bunch of, you know, lower latency use cases. People can use them for agentic coding much more readily and then have the high-end, you know, frontier model that is really useful for, you know, deep reasoning, you know, solving really complicated math problems, those kinds of things. And it's not that. One or the other is useful. They're both useful. So I think we'd like to do both. And also, you know, through distillation, which is a key technique for making the smaller models more capable, you know, you have to have the frontier model in order to then distill it into your smaller model. So it's not like an either or choice. You sort of need that in order to actually get a highly capable, more modest size model. Yeah.Alessio Fanelli [00:03:24]: I mean, you and Jeffrey came up with the solution in 2014.Jeff Dean [00:03:28]: Don't forget, L'Oreal Vinyls as well. Yeah, yeah.Alessio Fanelli [00:03:30]: A long time ago. But like, I'm curious how you think about the cycle of these ideas, even like, you know, sparse models and, you know, how do you reevaluate them? How do you think about in the next generation of model, what is worth revisiting? Like, yeah, they're just kind of like, you know, you worked on so many ideas that end up being influential, but like in the moment, they might not feel that way necessarily. Yeah.Jeff Dean [00:03:52]: I mean, I think distillation was originally motivated because we were seeing that we had a very large image data set at the time, you know, 300 million images that we could train on. And we were seeing that if you create specialists for different subsets of those image categories, you know, this one's going to be really good at sort of mammals, and this one's going to be really good at sort of indoor room scenes or whatever, and you can cluster those categories and train on an enriched stream of data after you do pre-training on a much broader set of images. You get much better performance. If you then treat that whole set of maybe 50 models you've trained as a large ensemble, but that's not a very practical thing to serve, right? So distillation really came about from the idea of, okay, what if we want to actually serve that and train all these independent sort of expert models and then squish it into something that actually fits in a form factor that you can actually serve? And that's, you know, not that different from what we're doing today. You know, often today we're instead of having an ensemble of 50 models. We're having a much larger scale model that we then distill into a much smaller scale model.Shawn Wang [00:05:09]: Yeah. A part of me also wonders if distillation also has a story with the RL revolution. So let me maybe try to articulate what I mean by that, which is you can, RL basically spikes models in a certain part of the distribution. And then you have to sort of, well, you can spike models, but usually sometimes... It might be lossy in other areas and it's kind of like an uneven technique, but you can probably distill it back and you can, I think that the sort of general dream is to be able to advance capabilities without regressing on anything else. And I think like that, that whole capability merging without loss, I feel like it's like, you know, some part of that should be a distillation process, but I can't quite articulate it. I haven't seen much papers about it.Jeff Dean [00:06:01]: Yeah, I mean, I tend to think of one of the key advantages of distillation is that you can have a much smaller model and you can have a very large, you know, training data set and you can get utility out of making many passes over that data set because you're now getting the logits from the much larger model in order to sort of coax the right behavior out of the smaller model that you wouldn't otherwise get with just the hard labels. And so, you know, I think that's what we've observed. Is you can get, you know, very close to your largest model performance with distillation approaches. And that seems to be, you know, a nice sweet spot for a lot of people because it enables us to kind of, for multiple Gemini generations now, we've been able to make the sort of flash version of the next generation as good or even substantially better than the previous generations pro. And I think we're going to keep trying to do that because that seems like a good trend to follow.Shawn Wang [00:07:02]: So, Dara asked, so it was the original map was Flash Pro and Ultra. Are you just sitting on Ultra and distilling from that? Is that like the mother load?Jeff Dean [00:07:12]: I mean, we have a lot of different kinds of models. Some are internal ones that are not necessarily meant to be released or served. Some are, you know, our pro scale model and we can distill from that as well into our Flash scale model. So I think, you know, it's an important set of capabilities to have and also inference time scaling. It can also be a useful thing to improve the capabilities of the model.Shawn Wang [00:07:35]: And yeah, yeah, cool. Yeah. And obviously, I think the economy of Flash is what led to the total dominance. I think the latest number is like 50 trillion tokens. I don't know. I mean, obviously, it's changing every day.Jeff Dean [00:07:46]: Yeah, yeah. But, you know, by market share, hopefully up.Shawn Wang [00:07:50]: No, I mean, there's no I mean, there's just the economics wise, like because Flash is so economical, like you can use it for everything. Like it's in Gmail now. It's in YouTube. Like it's yeah. It's in everything.Jeff Dean [00:08:02]: We're using it more in our search products of various AI mode reviews.Shawn Wang [00:08:05]: Oh, my God. Flash past the AI mode. Oh, my God. Yeah, that's yeah, I didn't even think about that.Jeff Dean [00:08:10]: I mean, I think one of the things that is quite nice about the Flash model is not only is it more affordable, it's also a lower latency. And I think latency is actually a pretty important characteristic for these models because we're going to want models to do much more complicated things that are going to involve, you know, generating many more tokens from when you ask the model to do so. So, you know, if you're going to ask the model to do something until it actually finishes what you ask it to do, because you're going to ask now, not just write me a for loop, but like write me a whole software package to do X or Y or Z. And so having low latency systems that can do that seems really important. And Flash is one direction, one way of doing that. You know, obviously our hardware platforms enable a bunch of interesting aspects of our, you know, serving stack as well, like TPUs, the interconnect between. Chips on the TPUs is actually quite, quite high performance and quite amenable to, for example, long context kind of attention operations, you know, having sparse models with lots of experts. These kinds of things really, really matter a lot in terms of how do you make them servable at scale.Alessio Fanelli [00:09:19]: Yeah. Does it feel like there's some breaking point for like the proto Flash distillation, kind of like one generation delayed? I almost think about almost like the capability as a. In certain tasks, like the pro model today is a saturated, some sort of task. So next generation, that same task will be saturated at the Flash price point. And I think for most of the things that people use models for at some point, the Flash model in two generation will be able to do basically everything. And how do you make it economical to like keep pushing the pro frontier when a lot of the population will be okay with the Flash model? I'm curious how you think about that.Jeff Dean [00:09:59]: I mean, I think that's true. If your distribution of what people are asking people, the models to do is stationary, right? But I think what often happens is as the models become more capable, people ask them to do more, right? So, I mean, I think this happens in my own usage. Like I used to try our models a year ago for some sort of coding task, and it was okay at some simpler things, but wouldn't do work very well for more complicated things. And since then, we've improved dramatically on the more complicated coding tasks. And now I'll ask it to do much more complicated things. And I think that's true, not just of coding, but of, you know, now, you know, can you analyze all the, you know, renewable energy deployments in the world and give me a report on solar panel deployment or whatever. That's a very complicated, you know, more complicated task than people would have asked a year ago. And so you are going to want more capable models to push the frontier in the absence of what people ask the models to do. And that also then gives us. Insight into, okay, where does the, where do things break down? How can we improve the model in these, these particular areas, uh, in order to sort of, um, make the next generation even better.Alessio Fanelli [00:11:11]: Yeah. Are there any benchmarks or like test sets they use internally? Because it's almost like the same benchmarks get reported every time. And it's like, all right, it's like 99 instead of 97. Like, how do you have to keep pushing the team internally to it? Or like, this is what we're building towards. Yeah.Jeff Dean [00:11:26]: I mean, I think. Benchmarks, particularly external ones that are publicly available. Have their utility, but they often kind of have a lifespan of utility where they're introduced and maybe they're quite hard for current models. You know, I, I like to think of the best kinds of benchmarks are ones where the initial scores are like 10 to 20 or 30%, maybe, but not higher. And then you can sort of work on improving that capability for, uh, whatever it is, the benchmark is trying to assess and get it up to like 80, 90%, whatever. I, I think once it hits kind of 95% or something, you get very diminishing returns from really focusing on that benchmark, cuz it's sort of, it's either the case that you've now achieved that capability, or there's also the issue of leakage in public data or very related kind of data being, being in your training data. Um, so we have a bunch of held out internal benchmarks that we really look at where we know that wasn't represented in the training data at all. There are capabilities that we want the model to have. Um, yeah. Yeah. Um, that it doesn't have now, and then we can work on, you know, assessing, you know, how do we make the model better at these kinds of things? Is it, we need different kind of data to train on that's more specialized for this particular kind of task. Do we need, um, you know, a bunch of, uh, you know, architectural improvements or some sort of, uh, model capability improvements, you know, what would help make that better?Shawn Wang [00:12:53]: Is there, is there such an example that you, uh, a benchmark inspired in architectural improvement? Like, uh, I'm just kind of. Jumping on that because you just.Jeff Dean [00:13:02]: Uh, I mean, I think some of the long context capability of the, of the Gemini models that came, I guess, first in 1.5 really were about looking at, okay, we want to have, um, you know,Shawn Wang [00:13:15]: immediately everyone jumped to like completely green charts of like, everyone had, I was like, how did everyone crack this at the same time? Right. Yeah. Yeah.Jeff Dean [00:13:23]: I mean, I think, um, and once you're set, I mean, as you say that needed single needle and a half. Hey, stack benchmark is really saturated for at least context links up to 1, 2 and K or something. Don't actually have, you know, much larger than 1, 2 and 8 K these days or two or something. We're trying to push the frontier of 1 million or 2 million context, which is good because I think there are a lot of use cases where. Yeah. You know, putting a thousand pages of text or putting, you know, multiple hour long videos and the context and then actually being able to make use of that as useful. Try to, to explore the über graduation are fairly large. But the single needle in a haystack benchmark is sort of saturated. So you really want more complicated, sort of multi-needle or more realistic, take all this content and produce this kind of answer from a long context that sort of better assesses what it is people really want to do with long context. Which is not just, you know, can you tell me the product number for this particular thing?Shawn Wang [00:14:31]: Yeah, it's retrieval. It's retrieval within machine learning. It's interesting because I think the more meta level I'm trying to operate at here is you have a benchmark. You're like, okay, I see the architectural thing I need to do in order to go fix that. But should you do it? Because sometimes that's an inductive bias, basically. It's what Jason Wei, who used to work at Google, would say. Exactly the kind of thing. Yeah, you're going to win. Short term. Longer term, I don't know if that's going to scale. You might have to undo that.Jeff Dean [00:15:01]: I mean, I like to sort of not focus on exactly what solution we're going to derive, but what capability would you want? And I think we're very convinced that, you know, long context is useful, but it's way too short today. Right? Like, I think what you would really want is, can I attend to the internet while I answer my question? Right? But that's not going to happen. I think that's going to be solved by purely scaling the existing solutions, which are quadratic. So a million tokens kind of pushes what you can do. You're not going to do that to a trillion tokens, let alone, you know, a billion tokens, let alone a trillion. But I think if you could give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses for that. You would have attend to the internet. You could attend to the pixels of YouTube and the sort of deeper representations that we can find. You could attend to the form for a single video, but across many videos, you know, on a personal Gemini level, you could attend to all of your personal state with your permission. So like your emails, your photos, your docs, your plane tickets you have. I think that would be really, really useful. And the question is, how do you get algorithmic improvements and system level improvements that get you to something where you actually can attend to trillions of tokens? Right. In a meaningful way. Yeah.Shawn Wang [00:16:26]: But by the way, I think I did some math and it's like, if you spoke all day, every day for eight hours a day, you only generate a maximum of like a hundred K tokens, which like very comfortably fits.Jeff Dean [00:16:38]: Right. But if you then say, okay, I want to be able to understand everything people are putting on videos.Shawn Wang [00:16:46]: Well, also, I think that the classic example is you start going beyond language into like proteins and whatever else is extremely information dense. Yeah. Yeah.Jeff Dean [00:16:55]: I mean, I think one of the things about Gemini's multimodal aspects is we've always wanted it to be multimodal from the start. And so, you know, that sometimes to people means text and images and video sort of human-like and audio, audio, human-like modalities. But I think it's also really useful to have Gemini know about non-human modalities. Yeah. Like LIDAR sensor data from. Yes. Say, Waymo vehicles or. Like robots or, you know, various kinds of health modalities, x-rays and MRIs and imaging and genomics information. And I think there's probably hundreds of modalities of data where you'd like the model to be able to at least be exposed to the fact that this is an interesting modality and has certain meaning in the world. Where even if you haven't trained on all the LIDAR data or MRI data, you could have, because maybe that's not, you know, it doesn't make sense in terms of trade-offs of. You know, what you include in your main pre-training data mix, at least including a little bit of it is actually quite useful. Yeah. Because it sort of tempts the model that this is a thing.Shawn Wang [00:18:04]: Yeah. Do you believe, I mean, since we're on this topic and something I just get to ask you all the questions I always wanted to ask, which is fantastic. Like, are there some king modalities, like modalities that supersede all the other modalities? So a simple example was Vision can, on a pixel level, encode text. And DeepSeq had this DeepSeq CR paper that did that. Vision. And Vision has also been shown to maybe incorporate audio because you can do audio spectrograms and that's, that's also like a Vision capable thing. Like, so, so maybe Vision is just the king modality and like. Yeah.Jeff Dean [00:18:36]: I mean, Vision and Motion are quite important things, right? Motion. Well, like video as opposed to static images, because I mean, there's a reason evolution has evolved eyes like 23 independent ways, because it's such a useful capability for sensing the world around you, which is really what we want these models to be. So I think the only thing that we can be able to do is interpret the things we're seeing or the things we're paying attention to and then help us in using that information to do things. Yeah.Shawn Wang [00:19:05]: I think motion, you know, I still want to shout out, I think Gemini, still the only native video understanding model that's out there. So I use it for YouTube all the time. Nice.Jeff Dean [00:19:15]: Yeah. Yeah. I mean, it's actually, I think people kind of are not necessarily aware of what the Gemini models can actually do. Yeah. Like I have an example I've used in one of my talks. It had like, it was like a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has like Michael Jordan hitting some jump shot at the end of the finals and, you know, some soccer goals and things like that. And you can literally just give it the video and say, can you please make me a table of what all these different events are? What when the date is when they happened? And a short description. And so you get like now an 18 row table of that information extracted from the video, which is, you know, not something most people think of as like a turn video into sequel like table.Alessio Fanelli [00:20:11]: Has there been any discussion inside of Google of like, you mentioned tending to the whole internet, right? Google, it's almost built because a human cannot tend to the whole internet and you need some sort of ranking to find what you need. Yep. That ranking is like much different for an LLM because you can expect a person to look at maybe the first five, six links in a Google search versus for an LLM. Should you expect to have 20 links that are highly relevant? Like how do you internally figure out, you know, how do we build the AI mode that is like maybe like much broader search and span versus like the more human one? Yeah.Jeff Dean [00:20:47]: I mean, I think even pre-language model based work, you know, our ranking systems would be built to start. I mean, I think even pre-language model based work, you know, our ranking systems would be built to start. With a giant number of web pages in our index, many of them are not relevant. So you identify a subset of them that are relevant with very lightweight kinds of methods. You know, you're down to like 30,000 documents or something. And then you gradually refine that to apply more and more sophisticated algorithms and more and more sophisticated sort of signals of various kinds in order to get down to ultimately what you show, which is, you know, the final 10 results or, you know, 10 results plus. Other kinds of information. And I think an LLM based system is not going to be that dissimilar, right? You're going to attend to trillions of tokens, but you're going to want to identify, you know, what are the 30,000 ish documents that are with the, you know, maybe 30 million interesting tokens. And then how do you go from that into what are the 117 documents I really should be paying attention to in order to carry out the tasks that the user has asked? And I think, you know, you can imagine systems where you have, you know, a lot of highly parallel processing to identify those initial 30,000 candidates, maybe with very lightweight kinds of models. Then you have some system that sort of helps you narrow down from 30,000 to the 117 with maybe a little bit more sophisticated model or set of models. And then maybe the final model is the thing that looks. So the 117 things that might be your most capable model. So I think it has to, it's going to be some system like that, that is really enables you to give the illusion of attending to trillions of tokens. Sort of the way Google search gives you, you know, not the illusion, but you are searching the internet, but you're finding, you know, a very small subset of things that are, that are relevant.Shawn Wang [00:22:47]: Yeah. I often tell a lot of people that are not steeped in like Google search history that, well, you know, like Bert was. Like he was like basically immediately inside of Google search and that improves results a lot, right? Like I don't, I don't have any numbers off the top of my head, but like, I'm sure you guys, that's obviously the most important numbers to Google. Yeah.Jeff Dean [00:23:08]: I mean, I think going to an LLM based representation of text and words and so on enables you to get out of the explicit hard notion of, of particular words having to be on the page, but really getting at the notion of this topic of this page or this page. Paragraph is highly relevant to this query. Yeah.Shawn Wang [00:23:28]: I don't think people understand how much LLMs have taken over all these very high traffic system, very high traffic. Yeah. Like it's Google, it's YouTube. YouTube has this like semantics ID thing where it's just like every token or every item in the vocab is a YouTube video or something that predicts the video using a code book, which is absurd to me for YouTube size.Jeff Dean [00:23:50]: And then most recently GROK also for, for XAI, which is like, yeah. I mean, I'll call out even before LLMs were used extensively in search, we put a lot of emphasis on softening the notion of what the user actually entered into the query.Shawn Wang [00:24:06]: So do you have like a history of like, what's the progression? Oh yeah.Jeff Dean [00:24:09]: I mean, I actually gave a talk in, uh, I guess, uh, web search and data mining conference in 2009, uh, where we never actually published any papers about the origins of Google search, uh, sort of, but we went through sort of four or five or six. generations, four or five or six generations of, uh, redesigning of the search and retrieval system, uh, from about 1999 through 2004 or five. And that talk is really about that evolution. And one of the things that really happened in 2001 was we were sort of working to scale the system in multiple dimensions. So one is we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality in general. Uh, because if you don't have the page in your index, you're going to not do well. Um, and then we also needed to scale our capacity because we were, our traffic was growing quite extensively. Um, and so we had, you know, a sharded system where you have more and more shards as the index grows, you have like 30 shards. And then if you want to double the index size, you make 60 shards so that you can bound the latency by which you respond for any particular user query. Um, and then as traffic grows, you add, you add more and more replicas of each of those. And so we eventually did the math that realized that in a data center where we had say 60 shards and, um, you know, 20 copies of each shard, we now had 1200 machines, uh, with disks. And we did the math and we're like, Hey, one copy of that index would actually fit in memory across 1200 machines. So in 2001, we introduced, uh, we put our entire index in memory and what that enabled from a quality perspective was amazing. Um, and so we had more and more replicas of each of those. Before you had to be really careful about, you know, how many different terms you looked at for a query, because every one of them would involve a disk seek on every one of the 60 shards. And so you, as you make your index bigger, that becomes even more inefficient. But once you have the whole index in memory, it's totally fine to have 50 terms you throw into the query from the user's original three or four word query, because now you can add synonyms like restaurant and restaurants and cafe and, uh, you know, things like that. Uh, bistro and all these things. And you can suddenly start, uh, sort of really, uh, getting at the meaning of the word as opposed to the exact semantic form the user typed in. And that was, you know, 2001, very much pre LLM, but really it was about softening the, the strict definition of what the user typed in order to get at the meaning.Alessio Fanelli [00:26:47]: What are like principles that you use to like design the systems, especially when you have, I mean, in 2001, the internet is like. Doubling, tripling every year in size is not like, uh, you know, and I think today you kind of see that with LLMs too, where like every year the jumps in size and like capabilities are just so big. Are there just any, you know, principles that you use to like, think about this? Yeah.Jeff Dean [00:27:08]: I mean, I think, uh, you know, first, whenever you're designing a system, you want to understand what are the sort of design parameters that are going to be most important in designing that, you know? So, you know, how many queries per second do you need to handle? How big is the internet? How big is the index you need to handle? How much data do you need to keep for every document in the index? How are you going to look at it when you retrieve things? Um, what happens if traffic were to double or triple, you know, will that system work well? And I think a good design principle is you're going to want to design a system so that the most important characteristics could scale by like factors of five or 10, but probably not beyond that because often what happens is if you design a system for X. And something suddenly becomes a hundred X, that would enable a very different point in the design space that would not make sense at X. But all of a sudden at a hundred X makes total sense. So like going from a disk space index to a in memory index makes a lot of sense once you have enough traffic, because now you have enough replicas of the sort of state on disk that those machines now actually can hold, uh, you know, a full copy of the, uh, index and memory. Yeah. And that all of a sudden enabled. A completely different design that wouldn't have been practical before. Yeah. Um, so I'm, I'm a big fan of thinking through designs in your head, just kind of playing with the design space a little before you actually do a lot of writing of code. But, you know, as you said, in the early days of Google, we were growing the index, uh, quite extensively. We were growing the update rate of the index. So the update rate actually is the parameter that changed the most. Surprising. So it used to be once a month.Shawn Wang [00:28:55]: Yeah.Jeff Dean [00:28:56]: And then we went to a system that could update any particular page in like sub one minute. Okay.Shawn Wang [00:29:02]: Yeah. Because this is a competitive advantage, right?Jeff Dean [00:29:04]: Because all of a sudden news related queries, you know, if you're, if you've got last month's news index, it's not actually that useful for.Shawn Wang [00:29:11]: News is a special beast. Was there any, like you could have split it onto a separate system.Jeff Dean [00:29:15]: Well, we did. We launched a Google news product, but you also want news related queries that people type into the main index to also be sort of updated.Shawn Wang [00:29:23]: So, yeah, it's interesting. And then you have to like classify whether the page is, you have to decide which pages should be updated and what frequency. Oh yeah.Jeff Dean [00:29:30]: There's a whole like, uh, system behind the scenes that's trying to decide update rates and importance of the pages. So even if the update rate seems low, you might still want to recrawl important pages quite often because, uh, the likelihood they change might be low, but the value of having updated is high.Shawn Wang [00:29:50]: Yeah, yeah, yeah, yeah. Uh, well, you know, yeah. This, uh, you know, mention of latency and, and saving things to this reminds me of one of your classics, which I have to bring up, which is latency numbers. Every programmer should know, uh, was there a, was it just a, just a general story behind that? Did you like just write it down?Jeff Dean [00:30:06]: I mean, this has like sort of eight or 10 different kinds of metrics that are like, how long does a cache mistake? How long does branch mispredict take? How long does a reference domain memory take? How long does it take to send, you know, a packet from the U S to the Netherlands or something? Um,Shawn Wang [00:30:21]: why Netherlands, by the way, or is it, is that because of Chrome?Jeff Dean [00:30:25]: Uh, we had a data center in the Netherlands, um, so, I mean, I think this gets to the point of being able to do the back of the envelope calculations. So these are sort of the raw ingredients of those, and you can use them to say, okay, well, if I need to design a system to do image search and thumb nailing or something of the result page, you know, how, what I do that I could pre-compute the image thumbnails. I could like. Try to thumbnail them on the fly from the larger images. What would that do? How much dis bandwidth than I need? How many des seeks would I do? Um, and you can sort of actually do thought experiments in, you know, 30 seconds or a minute with the sort of, uh, basic, uh, basic numbers at your fingertips. Uh, and then as you sort of build software using higher level libraries, you kind of want to develop the same intuitions for how long does it take to, you know, look up something in this particular kind of.Shawn Wang [00:31:21]: I'll see you next time.Shawn Wang [00:31:51]: Which is a simple byte conversion. That's nothing interesting. I wonder if you have any, if you were to update your...Jeff Dean [00:31:58]: I mean, I think it's really good to think about calculations you're doing in a model, either for training or inference.Jeff Dean [00:32:09]: Often a good way to view that is how much state will you need to bring in from memory, either like on-chip SRAM or HBM from the accelerator. Attached memory or DRAM or over the network. And then how expensive is that data motion relative to the cost of, say, an actual multiply in the matrix multiply unit? And that cost is actually really, really low, right? Because it's order, depending on your precision, I think it's like sub one picodule.Shawn Wang [00:32:50]: Oh, okay. You measure it by energy. Yeah. Yeah.Jeff Dean [00:32:52]: Yeah. I mean, it's all going to be about energy and how do you make the most energy efficient system. And then moving data from the SRAM on the other side of the chip, not even off the off chip, but on the other side of the same chip can be, you know, a thousand picodules. Oh, yeah. And so all of a sudden, this is why your accelerators require batching. Because if you move, like, say, the parameter of a model from SRAM on the, on the chip into the multiplier unit, that's going to cost you a thousand picodules. So you better make use of that, that thing that you moved many, many times with. So that's where the batch dimension comes in. Because all of a sudden, you know, if you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's really not good.Shawn Wang [00:33:40]: Yeah. Yeah. Right.Jeff Dean [00:33:41]: Because then you paid a thousand picodules in order to do your one picodule multiply.Shawn Wang [00:33:46]: I have never heard an energy-based analysis of batching.Jeff Dean [00:33:50]: Yeah. I mean, that's why people batch. Yeah. Ideally, you'd like to use batch size one because the latency would be great.Shawn Wang [00:33:56]: The best latency.Jeff Dean [00:33:56]: But the energy cost and the compute cost inefficiency that you get is quite large. So, yeah.Shawn Wang [00:34:04]: Is there a similar trick like, like, like you did with, you know, putting everything in memory? Like, you know, I think obviously NVIDIA has caused a lot of waves with betting very hard on SRAM with Grok. I wonder if, like, that's something that you already saw with, with the TPUs, right? Like that, that you had to. Uh, to serve at your scale, uh, you probably sort of saw that coming. Like what, what, what hardware, uh, innovations or insights were formed because of what you're seeing there?Jeff Dean [00:34:33]: Yeah. I mean, I think, you know, TPUs have this nice, uh, sort of regular structure of 2D or 3D meshes with a bunch of chips connected. Yeah. And each one of those has HBM attached. Um, I think for serving some kinds of models, uh, you know, you, you pay a lot higher cost. Uh, and time latency, um, bringing things in from HBM than you do bringing them in from, uh, SRAM on the chip. So if you have a small enough model, you can actually do model parallelism, spread it out over lots of chips and you actually get quite good throughput improvements and latency improvements from doing that. And so you're now sort of striping your smallish scale model over say 16 or 64 chips. Uh, but as if you do that and it all fits in. In SRAM, uh, that can be a big win. So yeah, that's not a surprise, but it is a good technique.Alessio Fanelli [00:35:27]: Yeah. What about the TPU design? Like how much do you decide where the improvements have to go? So like, this is like a good example of like, is there a way to bring the thousand picojoules down to 50? Like, is it worth designing a new chip to do that? The extreme is like when people say, oh, you should burn the model on the ASIC and that's kind of like the most extreme thing. How much of it? Is it worth doing an hardware when things change so quickly? Like what was the internal discussion? Yeah.Jeff Dean [00:35:57]: I mean, we, we have a lot of interaction between say the TPU chip design architecture team and the sort of higher level modeling, uh, experts, because you really want to take advantage of being able to co-design what should future TPUs look like based on where we think the sort of ML research puck is going, uh, in some sense, because, uh, you know, as a hardware designer for ML and in particular, you're trying to design a chip starting today and that design might take two years before it even lands in a data center. And then it has to sort of be a reasonable lifetime of the chip to take you three, four or five years. So you're trying to predict two to six years out where, what ML computations will people want to run two to six years out in a very fast changing field. And so having people with interest. Interesting ML research ideas of things we think will start to work in that timeframe or will be more important in that timeframe, uh, really enables us to then get, you know, interesting hardware features put into, you know, TPU N plus two, where TPU N is what we have today.Shawn Wang [00:37:10]: Oh, the cycle time is plus two.Jeff Dean [00:37:12]: Roughly. Wow. Because, uh, I mean, sometimes you can squeeze some changes into N plus one, but, you know, bigger changes are going to require the chip. Yeah. Design be earlier in its lifetime design process. Um, so whenever we can do that, it's generally good. And sometimes you can put in speculative features that maybe won't cost you much chip area, but if it works out, it would make something, you know, 10 times as fast. And if it doesn't work out, well, you burned a little bit of tiny amount of your chip area on that thing, but it's not that big a deal. Uh, sometimes it's a very big change and we want to be pretty sure this is going to work out. So we'll do like lots of carefulness. Uh, ML experimentation to show us, uh, this is actually the, the way we want to go. Yeah.Alessio Fanelli [00:37:58]: Is there a reverse of like, we already committed to this chip design so we can not take the model architecture that way because it doesn't quite fit?Jeff Dean [00:38:06]: Yeah. I mean, you, you definitely have things where you're going to adapt what the model architecture looks like so that they're efficient on the chips that you're going to have for both training and inference of that, of that, uh, generation of model. So I think it kind of goes both ways. Um, you know, sometimes you can take advantage of, you know, lower precision things that are coming in a future generation. So you can, might train it at that lower precision, even if the current generation doesn't quite do that. Mm.Shawn Wang [00:38:40]: Yeah. How low can we go in precision?Jeff Dean [00:38:43]: Because people are saying like ternary is like, uh, yeah, I mean, I'm a big fan of very low precision because I think that gets, that saves you a tremendous amount of time. Right. Because it's picojoules per bit that you're transferring and reducing the number of bits is a really good way to, to reduce that. Um, you know, I think people have gotten a lot of luck, uh, mileage out of having very low bit precision things, but then having scaling factors that apply to a whole bunch of, uh, those, those weights. Scaling. How does it, how does it, okay.Shawn Wang [00:39:15]: Interesting. You, so low, low precision, but scaled up weights. Yeah. Huh. Yeah. Never considered that. Yeah. Interesting. Uh, w w while we're on this topic, you know, I think there's a lot of, um, uh, this, the concept of precision at all is weird when we're sampling, you know, uh, we just, at the end of this, we're going to have all these like chips that I'll do like very good math. And then we're just going to throw a random number generator at the start. So, I mean, there's a movement towards, uh, energy based, uh, models and processors. I'm just curious if you've, obviously you've thought about it, but like, what's your commentary?Jeff Dean [00:39:50]: Yeah. I mean, I think. There's a bunch of interesting trends though. Energy based models is one, you know, diffusion based models, which don't sort of sequentially decode tokens is another, um, you know, speculative decoding is a way that you can get sort of an equivalent, very small.Shawn Wang [00:40:06]: Draft.Jeff Dean [00:40:07]: Batch factor, uh, for like you predict eight tokens out and that enables you to sort of increase the effective batch size of what you're doing by a factor of eight, even, and then you maybe accept five or six of those tokens. So you get. A five, a five X improvement in the amortization of moving weights, uh, into the multipliers to do the prediction for the, the tokens. So these are all really good techniques and I think it's really good to look at them from the lens of, uh, energy, real energy, not energy based models, um, and, and also latency and throughput, right? If you look at things from that lens, that sort of guides you to. Two solutions that are gonna be, uh, you know, better from, uh, you know, being able to serve larger models or, you know, equivalent size models more cheaply and with lower latency.Shawn Wang [00:41:03]: Yeah. Well, I think, I think I, um, it's appealing intellectually, uh, haven't seen it like really hit the mainstream, but, um, I do think that, uh, there's some poetry in the sense that, uh, you know, we don't have to do, uh, a lot of shenanigans if like we fundamentally. Design it into the hardware. Yeah, yeah.Jeff Dean [00:41:23]: I mean, I think there's still a, there's also sort of the more exotic things like analog based, uh, uh, computing substrates as opposed to digital ones. Uh, I'm, you know, I think those are super interesting cause they can be potentially low power. Uh, but I think you often end up wanting to interface that with digital systems and you end up losing a lot of the power advantages in the digital to analog and analog to digital conversions. You end up doing, uh, at the sort of boundaries. And periphery of that system. Um, I still think there's a tremendous distance we can go from where we are today in terms of energy efficiency with sort of, uh, much better and specialized hardware for the models we care about.Shawn Wang [00:42:05]: Yeah.Alessio Fanelli [00:42:06]: Um, any other interesting research ideas that you've seen, or like maybe things that you cannot pursue a Google that you would be interested in seeing researchers take a step at, I guess you have a lot of researchers. Yeah, I guess you have enough, but our, our research.Jeff Dean [00:42:21]: Our research portfolio is pretty broad. I would say, um, I mean, I think, uh, in terms of research directions, there's a whole bunch of, uh, you know, open problems and how do you make these models reliable and able to do much longer, kind of, uh, more complex tasks that have lots of subtasks. How do you orchestrate, you know, maybe one model that's using other models as tools in order to sort of build, uh, things that can accomplish, uh, you know, much more. Yeah. Significant pieces of work, uh, collectively, then you would ask a single model to do. Um, so that's super interesting. How do you get more verifiable, uh, you know, how do you get RL to work for non-verifiable domains? I think it's a pretty interesting open problem because I think that would broaden out the capabilities of the models, the improvements that you're seeing in both math and coding. Uh, if we could apply those to other less verifiable domains, because we've come up with RL techniques that actually enable us to do that. Uh, effectively, that would, that would really make the models improve quite a lot. I think.Alessio Fanelli [00:43:26]: I'm curious, like when we had Noam Brown on the podcast, he said, um, they already proved you can do it with deep research. Um, you kind of have it with AI mode in a way it's not verifiable. I'm curious if there's any thread that you think is interesting there. Like what is it? Both are like information retrieval of JSON. So I wonder if it's like the retrieval is like the verifiable part. That you can score or what are like, yeah, yeah. How, how would you model that, that problem?Jeff Dean [00:43:55]: Yeah. I mean, I think there are ways of having other models that can evaluate the results of what a first model did, maybe even retrieving. Can you have another model that says, is this things, are these things you retrieved relevant? Or can you rate these 2000 things you retrieved to assess which ones are the 50 most relevant or something? Um, I think those kinds of techniques are actually quite effective. Sometimes I can even be the same model, just prompted differently to be a, you know, a critic as opposed to a, uh, actual retrieval system. Yeah.Shawn Wang [00:44:28]: Um, I do think like there, there is that, that weird cliff where like, it feels like we've done the easy stuff and then now it's, but it always feels like that every year. It's like, oh, like we know, we know, and the next part is super hard and nobody's figured it out. And, uh, exactly with this RLVR thing where like everyone's talking about, well, okay, how do we. the next stage of the non-verifiable stuff. And everyone's like, I don't know, you know, Ellen judge.Jeff Dean [00:44:56]: I mean, I feel like the nice thing about this field is there's lots and lots of smart people thinking about creative solutions to some of the problems that we all see. Uh, because I think everyone sort of sees that the models, you know, are great at some things and they fall down around the edges of those things and, and are not as capable as we'd like in those areas. And then coming up with good techniques and trying those. And seeing which ones actually make a difference is sort of what the whole research aspect of this field is, is pushing forward. And I think that's why it's super interesting. You know, if you think about two years ago, we were struggling with GSM, eight K problems, right? Like, you know, Fred has two rabbits. He gets three more rabbits. How many rabbits does he have? That's a pretty far cry from the kinds of mathematics that the models can, and now you're doing IMO and Erdos problems in pure language. Yeah. Yeah. Pure language. So that is a really, really amazing jump in capabilities in, you know, in a year and a half or something. And I think, um, for other areas, it'd be great if we could make that kind of leap. Uh, and you know, we don't exactly see how to do it for some, some areas, but we do see it for some other areas and we're going to work hard on making that better. Yeah.Shawn Wang [00:46:13]: Yeah.Alessio Fanelli [00:46:14]: Like YouTube thumbnail generation. That would be very helpful. We need that. That would be AGI. We need that.Shawn Wang [00:46:20]: That would be. As far as content creators go.Jeff Dean [00:46:22]: I guess I'm not a YouTube creator, so I don't care that much about that problem, but I guess, uh, many people do.Shawn Wang [00:46:27]: It does. Yeah. It doesn't, it doesn't matter. People do judge books by their covers as it turns out. Um, uh, just to draw a bit on the IMO goal. Um, I'm still not over the fact that a year ago we had alpha proof and alpha geometry and all those things. And then this year we were like, screw that we'll just chuck it into Gemini. Yeah. What's your reflection? Like, I think this, this question about. Like the merger of like symbolic systems and like, and, and LMS, uh, was a very much core belief. And then somewhere along the line, people would just said, Nope, we'll just all do it in the LLM.Jeff Dean [00:47:02]: Yeah. I mean, I think it makes a lot of sense to me because, you know, humans manipulate symbols, but we probably don't have like a symbolic representation in our heads. Right. We have some distributed representation that is neural net, like in some way of lots of different neurons. And activation patterns firing when we see certain things and that enables us to reason and plan and, you know, do chains of thought and, you know, roll them back now that, that approach for solving the problem doesn't seem like it's going to work. I'm going to try this one. And, you know, in a lot of ways we're emulating what we intuitively think, uh, is happening inside real brains in neural net based models. So it never made sense to me to have like completely separate. Uh, discrete, uh, symbolic things, and then a completely different way of, of, uh, you know, thinking about those things.Shawn Wang [00:47:59]: Interesting. Yeah. Uh, I mean, it's maybe seems obvious to you, but it wasn't obvious to me a year ago. Yeah.Jeff Dean [00:48:06]: I mean, I do think like that IMO with, you know, translating to lean and using lean and then the next year and also a specialized geometry model. And then this year switching to a single unified model. That is roughly the production model with a little bit more inference budget, uh, is actually, you know, quite good because it shows you that the capabilities of that general model have improved dramatically and, and now you don't need the specialized model. This is actually sort of very similar to the 2013 to 16 era of machine learning, right? Like it used to be, people would train separate models for lots of different, each different problem, right? I have, I want to recognize street signs and something. So I train a street sign. Recognition recognition model, or I want to, you know, decode speech recognition. I have a speech model, right? I think now the era of unified models that do everything is really upon us. And the question is how well do those models generalize to new things they've never been asked to do and they're getting better and better.Shawn Wang [00:49:10]: And you don't need domain experts. Like one of my, uh, so I interviewed ETA who was on, who was on that team. Uh, and he was like, yeah, I, I don't know how they work. I don't know where the IMO competition was held. I don't know the rules of it. I just trained the models, the training models. Yeah. Yeah. And it's kind of interesting that like people with these, this like universal skill set of just like machine learning, you just give them data and give them enough compute and they can kind of tackle any task, which is the bitter lesson, I guess. I don't know. Yeah.Jeff Dean [00:49:39]: I mean, I think, uh, general models, uh, will win out over specialized ones in most cases.Shawn Wang [00:49:45]: Uh, so I want to push there a bit. I think there's one hole here, which is like, uh. There's this concept of like, uh, maybe capacity of a model, like abstractly a model can only contain the number of bits that it has. And, uh, and so it, you know, God knows like Gemini pro is like one to 10 trillion parameters. We don't know, but, uh, the Gemma models, for example, right? Like a lot of people want like the open source local models that are like that, that, that, and, and, uh, they have some knowledge, which is not necessary, right? Like they can't know everything like, like you have the. The luxury of you have the big model and big model should be able to capable of everything. But like when, when you're distilling and you're going down to the small models, you know, you're actually memorizing things that are not useful. Yeah. And so like, how do we, I guess, do we want to extract that? Can we, can we divorce knowledge from reasoning, you know?Jeff Dean [00:50:38]: Yeah. I mean, I think you do want the model to be most effective at reasoning if it can retrieve things, right? Because having the model devote precious parameter space. To remembering obscure facts that could be looked up is actually not the best use of that parameter space, right? Like you might prefer something that is more generally useful in more settings than this obscure fact that it has. Um, so I think that's always attention at the same time. You also don't want your model to be kind of completely detached from, you know, knowing stuff about the world, right? Like it's probably useful to know how long the golden gate be. Bridges just as a general sense of like how long are bridges, right? And, uh, it should have that kind of knowledge. It maybe doesn't need to know how long some teeny little bridge in some other more obscure part of the world is, but, uh, it does help it to have a fair bit of world knowledge and the bigger your model is, the more you can have. Uh, but I do think combining retrieval with sort of reasoning and making the model really good at doing multiple stages of retrieval. Yeah.Shawn Wang [00:51:49]: And reasoning through the intermediate retrieval results is going to be a, a pretty effective way of making the model seem much more capable, because if you think about, say, a personal Gemini, yeah, right?Jeff Dean [00:52:01]: Like we're not going to train Gemini on my email. Probably we'd rather have a single model that, uh, we can then use and use being able to retrieve from my email as a tool and have the model reason about it and retrieve from my photos or whatever, uh, and then make use of that and have multiple. Um, you know, uh, stages of interaction. that makes sense.Alessio Fanelli [00:52:24]: Do you think the vertical models are like, uh, interesting pursuit? Like when people are like, oh, we're building the best healthcare LLM, we're building the best law LLM, are those kind of like short-term stopgaps or?Jeff Dean [00:52:37]: No, I mean, I think, I think vertical models are interesting. Like you want them to start from a pretty good base model, but then you can sort of, uh, sort of viewing them, view them as enriching the data. Data distribution for that particular vertical domain for healthcare, say, um, we're probably not going to train or for say robotics. We're probably not going to train Gemini on all possible robotics data. We, you could train it on because we want it to have a balanced set of capabilities. Um, so we'll expose it to some robotics data, but if you're trying to build a really, really good robotics model, you're going to want to start with that and then train it on more robotics data. And then maybe that would. It's multilingual translation capability, but improve its robotics capabilities. And we're always making these kind of, uh, you know, trade-offs in the data mix that we train the base Gemini models on. You know, we'd love to include data from 200 more languages and as much data as we have for those languages, but that's going to displace some other capabilities of the model. It won't be as good at, um, you know, Pearl programming, you know, it'll still be good at Python programming. Cause we'll include it. Enough. Of that, but there's other long tail computer languages or coding capabilities that it may suffer on or multi, uh, multimodal reasoning capabilities may suffer. Cause we didn't get to expose it to as much data there, but it's really good at multilingual things. So I, I think some combination of specialized models, maybe more modular models. So it'd be nice to have the capability to have those 200 languages, plus this awesome robotics model, plus this awesome healthcare, uh, module that all can be knitted together to work in concert and called upon in different circumstances. Right? Like if I have a health related thing, then it should enable using this health module in conjunction with the main base model to be even better at those kinds of things. Yeah.Shawn Wang [00:54:36]: Installable knowledge. Yeah.Jeff Dean [00:54:37]: Right.Shawn Wang [00:54:38]: Just download as a, as a package.Jeff Dean [00:54:39]: And some of that installable stuff can come from retrieval, but some of it probably should come from preloaded training on, you know, uh, a hundred billion tokens or a trillion tokens of health data. Yeah.Shawn Wang [00:54:51]: And for listeners, I think, uh, I will highlight the Gemma three end paper where they, there was a little bit of that, I think. Yeah.Alessio Fanelli [00:54:56]: Yeah. I guess the question is like, how many billions of tokens do you need to outpace the frontier model improvements? You know, it's like, if I have to make this model better healthcare and the main. Gemini model is still improving. Do I need 50 billion tokens? Can I do it with a hundred, if I need a trillion healthcare tokens, it's like, they're probably not out there that you don't have, you know, I think that's really like the.Jeff Dean [00:55:21]: Well, I mean, I think healthcare is a particularly challenging domain, so there's a lot of healthcare data that, you know, we don't have access to appropriately, but there's a lot of, you know, uh, healthcare organizations that want to train models on their own data. That is not public healthcare data, uh, not public health. But public healthcare data. Um, so I think there are opportunities there to say, partner with a large healthcare organization and train models for their use that are going to be, you know, more bespoke, but probably, uh, might be better than a general model trained on say, public data. Yeah.Shawn Wang [00:55:58]: Yeah. I, I believe, uh, by the way, also this is like somewhat related to the language conversation. Uh, I think one of your, your favorite examples was you can put a low resource language in the context and it just learns. Yeah.Jeff Dean [00:56:09]: Oh, yeah, I think the example we used was Calamon, which is truly low resource because it's only spoken by, I think 120 people in the world and there's no written text.Shawn Wang [00:56:20]: So, yeah. So you can just do it that way. Just put it in the context. Yeah. Yeah. But I think your whole data set in the context, right.Jeff Dean [00:56:27]: If you, if you take a language like, uh, you know, Somali or something, there is a fair bit of Somali text in the world that, uh, or Ethiopian Amharic or something, um, you know, we probably. Yeah. Are not putting all the data from those languages into the Gemini based training. We put some of it, but if you put more of it, you'll improve the capabilities of those models.Shawn Wang [00:56:49]: Yeah.Jeff Dean [00:56:49]:

Side Hustle to Small Business
Greg Cantori builds an accessibility business from nonprofit roots

Side Hustle to Small Business

Play Episode Listen Later Feb 11, 2026 32:17


In this episode, Sanjay speaks with Greg Cantori, founder of Little Deeds Accessibility Solutions, about how his background in the nonprofit sector led him into the construction space, and ultimately to building a growing accessibility-focused business.   Greg shares how years of working in nonprofits shaped his understanding of impact, why accessibility for older adults is both a social need and a business opportunity, and how simple home modifications, like installing shower grab bars, can dramatically improve quality of life.   What you'll learn: • How nonprofit experience can translate into strong entrepreneurial skills • Why accessibility is an increasingly important (and underserved) market • How to move from service work to a scalable business model • What it takes to expand a local business into a national service • How purpose and profitability can coexist     Chapters  00:00 Introduction 4:02 Building Little Deeds 8:51 Scaling revenue 16:30 Moving internationally while running a business 27:16 Reflecting on the business 29:22 Advice for other entrepreneurs 30:44 Closing and contact   Learn more about Little Deeds Accessibility Solutions at littledeeds.com   At Hiscox, we believe in supporting entrepreneurs who bring bold ideas and strong communities to life. Explore resources and coverage options to help protect and grow your business at Hiscox.com.   #entrepreneurship #accessibility #nonprofit

Take Command: A Dale Carnegie Podcast
Keep Sharp in Chaos: A Surgeon's Mindset Hack

Take Command: A Dale Carnegie Podcast

Play Episode Listen Later Feb 10, 2026 44:06


About the Guest:Sanjay Gupta comes from a family of trailblazers. His mother, the first woman engineer hired at Ford Motor Company—a refugee who fled India at age 5—took Dale Carnegie courses to conquer public speaking fears, making How to Win Friends and Influence People a family staple. Inspired by her grit, Sanjay pursued neuroscience early, became a White House fellow, and joined CNN just before 9/11, evolving from healthcare wonk to global reporter on wars, disasters, and outbreaks—while still operating in war zones.That's why Sanjay is CNN's Chief Medical Correspondent, a practicing neurosurgeon at Emory, a bestselling author (Keep Sharp), and a Dale Carnegie graduate. He credits the course (taken at 16–17) for turning speaking terror into TV poise for millions. Hear more about blending medicine, media, and mentorship when you listen to this episode of the Dale Carnegie Taking Commandpodcast.What You Will Learn:Insights into how family resilience shapes bold careersLessons in humility as a leadership superpower ("Say 'I don't know'—it galvanizes teams")Stories about Dale Carnegie's hacks like using names and unsolicited praise notesThe hard truth on brain health: movement grows neurons, but brisk walks beat sprints for optimal resultsJoin us for this deep dive into balancing dual careers, learning from everyone, and optimizing your mind for peak performance. Sanjay isn't just a reporter—he's a perpetual student turning lessons into action. Tune in today to learn from one of the best. Please rate and review this Episode!We'd love to hear from you! Leaving a review helps us ensure we deliver content that resonates with you. Your feedback can inspire others to join our Take Command: A Dale Carnegie Podcast community & benefit from the leadership insights we share.

Radio Record
Record Release by Tim Vox #321 (09-02-2026)

Radio Record

Play Episode Listen Later Feb 9, 2026


01. Panjabi Mc, Sanjay, Glory Bawa - Bawa Boli 02. Zhu - Black Midas 03. David Vendetta, Eray Turkay - Old School 04. Vidojean, Oliver Loenn, Future Cartel, Julimar Santos - Joga Pro Alto (The Hustle) 05. Welker - Ice In My Eyes 06. Goom Gum, Dont Blink - Hard Decision 07. Camelphat, Arodes - Cycles 08. Fcukers - L.U.C.K.Y 09. Dubvision - Run 10. Boris Brejcha - 16 Red Even 11. Fedde Le Grand, Mr V, Tony Romera - Back & Forth 12. Maesic, Kilimanjaro, Zentola - Hold It 13. Jaden Bojsen, David Guetta - Upside Down 14. Syn Cole - Stories Untold 15. Moby, Blond Ish, Kiko Franco - Natural Blues 16. Martin Garrix, Sebastian Ingrosso, Citadelle - Peace of Flood 17. Jaden Bojsen, David Guetta - Upside Down 18. Hugel, Divolly, Markward - Jump & Shout 19. Boris Way - Under Pressure 20. Nicky Romero, Monocule - Lost In Light 21. Fisher - Rain 22. Nicky Romero, Monocule, Dan Soleil - Colorful 23. Hugel, Grossomoddo, Sphynx - Hadid 24. Switch Disco, Korolova - Empty Skies 25. Damon Sharpe - Housaholic 26. Dyro, Lea Key - Happy Now 27. Don Diablo, Qobra - Remember Me 28. Catz 'N Dogz, Members Of Mayday, Wozz Lozowski - Mayday 29. Stev Dive - Out of Space 30. Chapter & Verse - Can't Get Enough 31. Firebeatz - Block Rockin 32. Cosmic Gate, Cmd_Ctrl - Need A Little Love 33. Amero, Clmd - Come Back to Me 34. Bingo Players, Vion Konger - Rattle 35. Duck Sauce - You're Nasty 36. Yves V, Chester Young, Tommy Veanud - One Of A Kind 37. Conduit, Charlotte De Witte - A Prayer for the Dancefloor

Stratagize
Mission: Unsustainable: Reacting to Sanjay's Exit

Stratagize

Play Episode Listen Later Feb 5, 2026 51:52


In this reaction episode, Brent and James unpack Sanjay Maharaj's decision to sell his strata management company and reflect on the broader questions it raises. What happens when rising costs, talent scarcity, and evolving client expectations converge? Is the current service model built to withstand the long-term pressures of the job?  Brent and James take a thoughtful look at the emotional and operational demands that led Sanjay to move on, and what leaders in the space can learn from his story. It's not a crisis, it's a moment for reflection.  Connect with Stratagize: Website Linkedin Email Connect with Brent Anderson Linkedin Connect with James Milne Linkedin

The Tech Blog Writer Podcast
Cloudinary and the Business Case for Developer-Led Product Growth

The Tech Blog Writer Podcast

Play Episode Listen Later Feb 4, 2026 27:08


How do you turn a developer-first product into a growth engine without losing trust, clarity, or focus along the way? In this episode of Tech Talks Daily, I'm joined by Sanjay Sarathy, VP of Developer Experience and Self Service at Cloudinary, for a grounded and thoughtful conversation about product-led growth when developers sit at the center of the story. Sanjay operates at a rare intersection. He leads Cloudinary's high-volume self-service motion while also caring for the developer community that fuels adoption, advocacy, and long-term loyalty. That dual perspective, part business, part builder, shapes everything we discuss. Our conversation picks up on a theme I have been exploring across recent episodes. When technical work is explained clearly, whether that is security, performance, or reliability, it stops being background noise and starts supporting growth. Sanjay shares how Cloudinary approached this from day one, starting with founders who were developers themselves and carried a deep respect for developer trust into the company's DNA. Documentation that reflects reality, platforms that behave exactly as promised, and support that shows up early rather than as an afterthought all play a part. What stood out to me was how early Cloudinary invested in technical support, even before many traditional growth motions were in place. That decision shaped a self-service experience that still feels human at scale. With thousands of developer sign-ups every day and millions of developers using the platform, Sanjay explains how trust compounds into referrals, word of mouth, and sustained adoption. We also dig into developer advocacy and why community is rarely a single thing. Developers gather around frameworks, tools, workflows, and shared problems, and Cloudinary has learned to meet them where they already are rather than forcing them into a single branded space. From React and Next.js users to enterprise advisory boards, feedback loops become part of the product itself. As AI reshapes how software is built and developer tools become more crowded, Sanjay offers a clear-eyed view on what separates companies that grow steadily from those that burn bright and stall. Profitability, experimentation with intent, and the discipline to double down on what works all feature heavily in his thinking. It is a conversation rooted in experience rather than theory. If you care about product-led growth, developer trust, or building platforms that scale without losing their soul, this episode offers plenty to think about. As always, I would love to hear your perspective too. How do you see developer communities shaping the next phase of product growth, and where do you think companies still get it wrong?

Side Hustle to Small Business
Marisol Colette on merging her two passions into one business

Side Hustle to Small Business

Play Episode Listen Later Feb 4, 2026 33:09


In this episode, Marisol Colette, founder of Sol Reflections, shares how she combined two seemingly different passions, therapy and fashion styling, into a single, transformative business. Tune in as Sanjay and Marisol discuss how she approached pricing her services, what it took to hire a full-time employee for the first time, and how she built a model that supports both personal expression and emotional wellbeing.   Whether you're growing a service-based business or trying to merge multiple passions into one career, this conversation offers practical insight and honest reflection.   What you'll learn: • How Marisol combined therapy and fashion into one aligned business • How she approached pricing when her services evolved • What founders should know before hiring their first full-time employee • The mindset shifts that help creatives build sustainable businesses • How personal identity and entrepreneurship connect   Chapters: 00:00 Introduction and background 5:30 Starting the business 13:53 Maintaining multiple businesses 20:12 Overcoming nerves 21:50 Hiring 24:55 Balancing work and life 28:35 Advice for other entrepreneurs 30:17 Reflecting on the business 31:49 Closing and contact   Learn more about Marisol Colette and Sol Reflections: solreflection.com      #entrepreneurship #smallbusiness #womeninbusiness    At Hiscox, we believe in supporting entrepreneurs who bring bold ideas and strong communities to life. Explore resources and coverage options to help protect and grow your business at Hiscox.com.

The Scuffed Soccer Podcast | USMNT, Yanks Abroad, MLS, futbol in America
#663: Poch tells Weah to can it, Wes and Malik ball out

The Scuffed Soccer Podcast | USMNT, Yanks Abroad, MLS, futbol in America

Play Episode Listen Later Feb 3, 2026 75:37


Sanjay Sujanthakumar joins Belz and Vince to talk through Poch on Weah on World Cup ticket prices, plus a big week to recap for Malik and Wes, and much more as the countdown continues.Sanjay on Twitter: https://x.com/tha_Real_Kumar Our trip to Germany and the Netherlands, spots running out: https://docs.google.com/forms/d/e/1FAIpQLSfI4Cp1VpS2eCphsNjf6QHdaRDq86Tf-FeUhJ2tQ0RzkbxQhw/viewform Skip the ads! Subscribe to Scuffed on Patreon and get all episodes ad-free, plus any bonus episodes. Patrons at $5 a month or more also get access to Clip Notes, a video of key moments on the field we discuss on the show, plus all patrons get access to our private Discord server, live call-in shows, and the full catalog of historic recaps we've made: https://www.patreon.com/scuffedAlso, check out Boots on the Ground, our USWNT-focused spinoff podcast headed up by Tara and Vince. They are cooking over there, you can listen here: https://boots-on-the-ground.simplecast.comAnd check out our MERCH, baby. We have better stuff than you might think: https://www.scuffedhq.com/store Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Side Hustle to Small Business
Rachel Lundberg leads Thrive Yoga and Wellness through community building

Side Hustle to Small Business

Play Episode Listen Later Jan 28, 2026 29:25


Rachel Lundberg is the founder of Thrive Yoga and Wellness in Oregon City, Oregon, a wellness center that aims to reconnect and nourish your body and mind. Rachel and Sanjay discuss her early journey into yoga, her approach to building a supportive and inclusive studio environment, and why community connection is essential for long-term success in the wellness industry.   What you'll learn: • How Rachel transitioned into ownership at Thrive Yoga and Wellness • Why community plays a central role in wellness-based businesses • How early personal experiences can shape an entrepreneurial path • Practical lessons for growing a mission-driven yoga or wellness studio   Chapters: 00:00 Introduction 2:18 Buying an existing business 8:15 Rachel's yoga background 10:43 Shifting roles 18:09 Successes and failures 23:57 Exercise and wellness 26:16 Reflecting on the business 27:58 Closing and contact   Learn more about Rachel and Thrive Yoga and Wellness here:  https://www.thriveyogaoc.com/       #smallbusinesss #yoga #hiscoxbusiness   At Hiscox, we believe in supporting entrepreneurs who bring bold ideas and strong communities to life. Explore resources and coverage options to help protect and grow your business at Hiscox.com.

Fluent Fiction - Hindi
Anaya's Winter Triumph: Balancing Duty and Crisis

Fluent Fiction - Hindi

Play Episode Listen Later Jan 28, 2026 15:05 Transcription Available


Fluent Fiction - Hindi: Anaya's Winter Triumph: Balancing Duty and Crisis Find the full episode transcript, vocabulary words, and more:fluentfiction.com/hi/episode/2026-01-28-08-38-20-hi Story Transcript:Hi: अनाया की बड़ी, हलचल भरी दुनिया में आज का दिन बहुत महत्वपूर्ण था।En: Today was an important day in Anaya's big, bustling world.Hi: सर्दी का मौसम अपने चरम पर था।En: Winter was at its peak.Hi: उनके घर के चारों ओर सुंदर सजावट थी, जो उन्हें सर्दियों के ठंडी दिनों में गर्माहट का एहसास दिला रही थी।En: Beautiful decorations surrounded her home, giving a sense of warmth during the cold winter days.Hi: अनाया एक मेहनती छात्रा थी, जो नए-नए घर की जिम्मेदारियाँ भी उठाने लगी थी।En: Anaya was a hard-working student, who had also started taking on new household responsibilities.Hi: हालाँकि, उसके माता-पिता अब भी उस पर नजर रखते थे, और अक्सर उसे कुछ निर्णय खुद लेने का मौका नहीं मिलता था।En: However, her parents still kept an eye on her, and often she didn't get the opportunity to make some decisions on her own.Hi: आज अनाया को एक विशेष अध्ययन समूह में ऑनलाइन शामिल होना था, जहाँ उसे अपने दोस्तों के साथ मिलकर एक महत्वपूर्ण परियोजना पर काम करना था।En: Today, Anaya had to join a special online study group where she was going to work with her friends on an important project.Hi: उसके छोटे भाई, रिहान, ने सुबह से थोड़ी अस्वस्थता महसूस की थी, लेकिन अनाया को लगा कि यह कुछ मामूली एलर्जी होगी।En: Her younger brother, Rihaan, had been feeling slightly unwell since morning, but Anaya thought it was a minor allergy.Hi: उनके माता-पिता किसी जरूरी काम से बाहर गए थे और अनाया को अपने भाई की देखभाल खुद करनी थी।En: Her parents had gone out for a necessary task, and Anaya had to take care of her brother by herself.Hi: दो पहर के समय, अनाया ने देखा कि रिहान की तबीयत बिगड़ रही थी।En: By the afternoon, Anaya noticed that Rihaan's condition was deteriorating.Hi: उसके चेहरे पर लाल चकत्ते होने लगे और सांस में दिक्कत महसूस हो रही थी।En: He began to have red rashes on his face and was experiencing difficulty in breathing.Hi: अनाया के मन में चिंता की लहर दौड़ गई।En: A wave of worry rushed through Anaya's mind.Hi: उसे अपने अध्ययन समूह की बैठक में भी शामिल होना था और रिहान की देखभाल भी करनी थी।En: She had to attend her study group meeting and also take care of Rihaan.Hi: अनाया ने अपने बचपन के दोस्त संजय को फोन किया।En: Anaya called her childhood friend, Sanjay.Hi: संजय को एलर्जी के बारे में अच्छी जानकारी थी।En: Sanjay had good knowledge about allergies.Hi: संजय से बात करते हुए अनाया ने परिवार के डॉक्टर को भी संपर्क किया।En: While talking to Sanjay, Anaya also contacted the family doctor.Hi: संजय तुरंत उनके घर पहुंचा और डॉक्टर के निर्देशानुसार रिहान को दवा दी।En: Sanjay promptly reached their home and administered medication to Rihaan as per the doctor's instructions.Hi: धीर-धीरे रिहान की हालत स्थिर होने लगी।En: Gradually, Rihaan's condition began to stabilize.Hi: अनाया को भारी राहत मिली।En: Anaya felt immense relief.Hi: इस बीच, उसने अपने लैपटॉप को तैयार किया और ऑनलाइन अपने अध्ययन समूह से जुड़ गई।En: Meanwhile, she prepared her laptop and joined her online study group.Hi: उसने अपनी परियोजना पर ध्यान केंद्रित किया और सभी दायित्वों का उचित निर्वाह किया।En: She focused on her project and adequately fulfilled all her responsibilities.Hi: इस घटना के बाद, अनाया ने महसूस किया कि वह तुरंत निर्णय लेने और संकट का सामना करने में सक्षम है।En: After this incident, Anaya realized that she was capable of making quick decisions and facing crises.Hi: उसके परिवार के सदस्यों ने उसकी इस अद्भुत जिम्मेदारीपूर्ण रवैये को देखकर उस पर अधिक भरोसा करना शुरू किया।En: Her family members began to trust her more upon witnessing her remarkable sense of responsibility.Hi: अनाया ने अपने आत्मविश्वास को फिर से महसूस किया और जाना कि अब वह किसी भी परिस्थिति का सामना कर सकती है।En: Anaya regained her self-confidence and realized that she could now face any situation.Hi: यह उसके लिए सकारात्मक बदलाव का संकेत था, जिससे उसका परिवार भी गर्वित हुआ।En: This was a sign of positive change for her, which made her family proud as well. Vocabulary Words:bustling: हलचल भरीdecorations: सजावटresponsibilities: जिम्मेदारियाँopportunity: मौकाdeteriorating: बिगड़नाrashes: चकत्तेbreathing: सांसworry: चिंताallergies: एलर्जीadministered: दियाmedication: दवाinstructions: निर्देशानुसारstabilize: स्थिर होनाimmense: भारीrelief: राहतfulfilled: निर्वाहadequately: उचितcrises: संकटremarkable: अद्भुतself-confidence: आत्मविश्वासrealized: महसूस कियाpositive: सकारात्मकpeak: चरमwarmth: गर्माहटnecessary: जरूरीcondition: तबीयतdifficulty: दिक्कतpromptly: तुरंतcapable: सक्षमwitnessing: देखकर

The Psilocybin Podcast, Tales from Eleusinia
From Market Mind to Inner Stillness: Sanjay's Return to Self

The Psilocybin Podcast, Tales from Eleusinia

Play Episode Listen Later Jan 25, 2026 35:33


For fifteen years, Sanjay lived inside the fast-paced world of London's financial trading, a high-performance identity built on precision, pressure, and emotional rationalization. In this episode, he reflects on how his career quietly shaped his sense of self, his emotional landscape, and the ways he learned to intellectualize rather than feel.After previous psychedelic experiences in more confined, unsupported environments, Sanjay arrives at Eleusinia and encounters something profoundly different, a setting rooted in safety, spaciousness, and intentional care. Through this contrast, he explores how environment, support, and community transform not only the experience itself, but the depth of integration that follows. What emerges is a powerful shift: from striving to stillness, from identity to essence, from noise to peace. Sanjay shares how his perception of self has softened, how his inner world has become quieter, and how he now accesses a deeper, more stable sense of presence.Not through performance, but through peace.

Everyday MBA
Build Financial Resilience And Manage Risk at Scale

Everyday MBA

Play Episode Listen Later Jan 24, 2026 24:20


Sanjay Chadha talks about how companies can build resilience, manage financial and cybersecurity risks, and scale successfully. Sanjay is the co-founder of SAV Associates, a global advisory firm specializing in corporate finance, cybersecurity, and risk management. Listen for strategies that blend financial acumen, operational insight, and digital risk expertise. Host, Kevin Craine Do you want to be a guest? https://Everyday-MBA.com/guest Do you want to advertise on the show? https://Everyday-MBA.com/advertise

Coronavirus: Fact vs Fiction
What Matters to You? A New Way to Heal

Coronavirus: Fact vs Fiction

Play Episode Listen Later Jan 23, 2026 30:10


Doctors have long prescribed pills and procedures. But for some people, that isn't enough. Sanjay sits down with Julia Hotz, author of The Connection Cure, to explore the rise of social prescribing—linking patients to things like volunteering, art, or nature—and how a simple question, “What matters to you?”, can change the way people heal.  Producer: Kyra Dahring Medical Writer: Andrea Kane Showrunner: Amanda Sealy Senior Producer: Dan Bloom Technical Director: Dan Dzula Executive Producer: Steve Lickteig Learn more about your ad choices. Visit podcastchoices.com/adchoices

Columbia Broken Couches
Sanjay Mishra on Office Office, Golmaal, Dhamaal & His Most Iconic Roles

Columbia Broken Couches

Play Episode Listen Later Jan 23, 2026 91:19


Welcome to PGX: Raw & RealPGX: Raw & Real is simple. I sit with people who've lived through something and/or made it big, and I try to understand what it did to them.Sometimes it gets deep, sometimes it gets weird, sometimes we end up laughing at stories that should've gone very differently — just like how real conversations go.This isn't meant to be inspiration or a template for life (for that, you can check out PGX Ideas).This space is different. It's their story, as they experienced it.In this episode, I spoke to Sanjay Mishra — Indian ActorTimestamps:00:00 - Welcome to Raw & Real03:10 - Delhi's pollution 07:45 - TV dramas, police corruption & dialogues 16:00 - Copy-paste formula of Bollywood24:00 - Bollywood copies hollywood? 32:10 - Movie Recommendations by Sanjay33:50 - Vadh 239:20 - Power of Cinema52:30 - Sanjay gets emotional while talking about his father1:10:45 - food stories / cheap thrills / simple pleasures 1:16:25 - Are we forgetting our culture?1:25:00 - What works in India?Enjoy.— Prakhar

The Core Report
#781 From Oil To Hydrogen: BPCL's Roadmap For India's Energy Transition | Govindraj Ethiraj x Sanjay Khanna, BPCL | India Energy Week 2026 | The Core Report

The Core Report

Play Episode Listen Later Jan 23, 2026 33:11


From Oil To Hydrogen BPCL's roadmap for India's energy transition is at the heart of this special episode of The Core Report from India Energy Week 2026. Financial journalist Govindraj Ethiraj is in conversation with Sanjay Khanna, Director Refineries and Additional Charge of Chairman and Managing Director at BPCL, on how India's energy future is being shaped amid global uncertainty and rising demand.As India navigates the shift from fossil fuels to cleaner energy, this discussion explores how BPCL is strengthening its core refining and fuel operations while investing in hydrogen, ethanol, renewables, petrochemicals, and digital transformation. The conversation offers a rare inside view of Project Aspire, BPCL's long-term strategy to drive growth, energy security, and decarbonisation at scale.Sanjay Khanna explains BPCL's green hydrogen journey, including falling hydrogen costs, electrolyser projects, and the role of hydrogen in mobility and transport. He also shares why oil and gas will continue to coexist with renewables in India's energy mix, how geopolitical risks are managed through diversified crude sourcing, and what makes Indian refineries uniquely flexible in handling global crude varieties.The episode also covers ethanol blending, biofuels, flex fuel readiness, LNG as a transition fuel, renewable energy expansion, carbon capture, petrochemicals, and the growing role of AI and digital tools in improving refinery efficiency and reliability. For professionals following India's economy, energy security, climate goals, or corporate strategy, this conversation delivers depth, clarity, and long-term perspective.This India Energy Week 2026 special edition is essential viewing for business leaders, consultants, policymakers, investors, and professionals seeking to understand how India is balancing growth, sustainability, and energy transition in a rapidly changing world.Subscribe to The Core Report for trusted conversations on business, energy, policy, and the forces shaping India's future.Register for India Energy Week 2026: https://www.indiaenergyweek.com/forms/register-as-a-delegate

Side Hustle to Small Business
Jodi Peterman builds a client-focused interior design practice

Side Hustle to Small Business

Play Episode Listen Later Jan 21, 2026 29:17


Jodi Peterman is the owner of Elizabeth Erin Designs, a full-service interior design firm known for creating timeless, functional spaces tailored to each client's lifestyle. What began as Jodi's passion for design has grown into a respected business serving clients with a commitment to exceptional service and practical, elegant solutions.   In this episode of the Side Hustle to Small Business® Podcast, Jodi shares her journey as a design founder and entrepreneur. She and Sanjay discuss how to choose the right interior designer, the importance of staying within budget, and the realities of balancing business ownership with maintaining mental health and managing stress.   What You'll Learn: • How to evaluate and select the right interior designer for your project • Strategies for creating beautiful spaces without going over budget • What it takes to grow and manage a successful design business • How entrepreneurs can maintain balance, reduce stress, and protect their well-being   Learn more about Elizabeth Erin Designs at https://elizabetherindesigns.com   Chapters: 00:00 Opening and introduction 8:16 Finding the right client 16:57 Pleasing your clients 19:48 Staying in budget 21:03 Balancing life and work 23:39 Balancing offices across the country 26:44 Advice for other entrepreneurs 28:05 Closing and contact   #SmallBusiness #InteriorDesign #HomeDesign   At Hiscox, we provide customized insurance solutions for small businesses and entrepreneurs, empowering you to take risks with confidence. With over 100 years of expertise, we offer coverage options like general liability and professional liability, helping you protect what matters most. Learn more at hiscox.com.

MSU Today with Russ White
MSU leads talent development for an innovation economy with Green and White Council

MSU Today with Russ White

Play Episode Listen Later Jan 21, 2026 32:30


Michigan State University has unveiled the signature initiatives of its specially appointed Green and White Council. The Council was convened by MSU President Kevin Guskiewicz and tasked with bringing forward ideas to strengthen the state's workforce, connect students to high-quality careers, and accelerate innovation across Michigan's industries. Launched by Guskiewicz in April, and co-chaired by Matt Elliott and Sanjay Gupta, the Green and White Council comprises more than a dozen prominent leaders, including representatives from Dart Container, Bedrock Detroit, Blue Cross Blue Shield of Michigan, ITC Holdings and Carhartt, representing a cross-section of industry and innovation that drive the economy.  Conversation Highlights:(1:37) - Before we discuss the signature initiatives, remind us why you thought it was important to pull this group together and what you charged them to do.(2:57) - Why did you select Matt and Sanjay to co-chair the council? And talk about the membership of the council and the variety of backgrounds you wanted to get input from.(4:16) - Why was it important to you to co-chair council and lead this initiative? And talk about the process and collaboration of the council. How did you do your work and go about selecting these three initiatives?(7:15) - Enhancing MSU's current work to connect education and industry, the members of the Green and White Council used their experience, knowledge and effort, to shape three transformative initiatives:  AI-Ready Spartans Career-Connected Spartans Spartan Catalyst Elaborate on the initiatives, and why did you settle on these three?(8:34) – What do you mean by AI-Ready Spartans?(12:00) – What are Career-Connected Spartans?(16:20) – What is a Spartan Catalyst?(21:33) – What are your thoughts on what Matt and Sanjay have been discussing?(23:23) - How do you envision the initiatives being implemented across campus over the coming weeks, months and even years?(27:36) - Will the council disband or will you keep working?(28:34) – Closing thoughts from the group.Listen to “MSU Today with Russ White” on the radio and through Spotify, Apple Podcasts, and wherever you get your shows. Conversation Transcript:Russ White (00:00):Michigan State University has unveiled the signature initiatives of the specially appointed green and white council. The council was convened by MSU President Kevin Guskiewicz and tasked with bringing forward ideas to strengthen the state's workforce, connect students to high quality careers and accelerate innovation across Michigan's industries. Launched by President Guskiewicz in April and co-chaired by Matt Elliot and Sanjay Gupta. The Green and White Council comprises more than a dozen prominent leaders, including representatives from Dart Container, from Bedrock Detroit, Blue Cross Blue Shield of Michigan, ITC Holdings and Carhartt representing a cross section of industry and innovation that drive the economy. And President Guskiewicz it's always great to have you back on the program. Good to see you again.Kevin Guskiewicz (00:51):Good to see you, Russ. Thanks for having me.Russ White (00:52):Sanjay Gupta is the Dean Emeritus, and Eli and Edythe L. Broad endowed professor in MSU's. Eli Broad College of Business. Sanjay, great to have you on again.Sanjay Gupta (01:02):Always good to be with you, Russ. Thank you.Russ White (01:03):And Matt, you've got your hands into so many things. I know Bank of America, just tell us how you'd like our audience to know about your background.Matt Elliott (01:10):Well, I'm the former president of Bank of America, Michigan, and now I lead a group of people under the banner of Blue Lake Ideas. And what we do is we consult with companies, boards, and institutions to help them lead through a world of accelerating change.Russ White (01:24):Excellent.Kevin Guskiewicz (01:25):And he's a proud Spartan alum. Russ White (01:26):Kevin, before we discuss the signature initiatives, remind us why you thought it was important to pull this group together and what you charge them to do.Kevin Guskiewicz (01:38):Well, Russ, I've said since I got here about 22 months ago now, that I wanted to be sure that Michigan State was always leading, that we were viewed as the leaders in research, education, service to the state, but also to the nation and the world. And we're going to lead in how we redefine the way in which we can better prepare our graduates for the workforce demands of today and tomorrow, jobs and careers that don't even exist today, that our graduates will be needing to be prepared for over the next three, four decades. So we charged them with gaining a better understanding from industry leaders in about five or six different sectors as to where higher ed is not delivering, it's going to be needed for the future, and I couldn't be happier with where we are. That's sort of one of the initiatives and others really around how we can better connect our graduates t...

Biohacking Superhuman Performance
#405: Heart Attacks Aren't What You Think | The Plaque LIE That Changes Everything (Cardiology 2.0) With Dr. Sanjay Bhojraj

Biohacking Superhuman Performance

Play Episode Listen Later Jan 20, 2026 91:45


Today, I'm joined by the deeply thoughtful and refreshingly honest Dr. Sanjay Bhojraj, a self-described "curious cardiologist" who spent decades treating heart attacks in the cath lab — before stepping away to ask a bigger question: Why are we waiting for the crisis instead of preventing it?   Episode Timestamps: Welcome to Longevity & episode setup … 00:00:00 Dr. Bhojraj's shift from ER cardiology to prevention … 00:06:30 Why most heart attacks aren't caused by big blockages … 00:09:15 Stress, nervous system load & heart attack risk … 00:13:10 CIMT explained: what it measures (and what it misses) … 00:26:40 Calcium scores vs CT angiograms … 00:35:45 CLEERLY scan: seeing dangerous soft plaque … 00:38:45 Can plaque actually regress? … 00:41:55 When heart scans make patients less afraid … 00:44:05 When should you test — even without symptoms? … 00:45:50 Why age 45 is a major cardiovascular inflection point … 00:47:10 Hormones, estrogen loss & women's heart risk … 00:50:10 Why cardiology still misunderstands women … 00:54:30 Small dense LDL, ApoB & oxidized cholesterol … 01:02:00 Why fixing inflammation matters more than numbers … 01:05:50   Our Amazing Sponsors: Regenerive - Built around clinically validated Longufera (Ash X4) to support core aging pathways—so it's not just "healthy aging" in theory. Go to regenerive.co and use code NAT25 to save 25%   Mitopure® Longevity Gummies are the only clinically proven Urolithin A gummies supporting mitochondrial health — one of the key hallmarks of aging. Get 35% off a one-month subscription at Timeline.com/Nat2026 *Special deal through January 2026.   PW1 by Puori — A clean, high-quality whey protein that's third-party tested for over 200 contaminants and smooth enough to feel like a treat while supporting muscle, metabolism, and bone strength. Go to puori.com/NAT and use code NAT for 32% off your first subscription or 20% off anything on the site.   Nat's Links:  YouTube Channel Join My Membership Community Sign up for My Newsletter  Instagram  Facebook Group

Radio Campus Tours – 99.5 FM
Strickly Good Sound – #54

Radio Campus Tours – 99.5 FM

Play Episode Listen Later Jan 20, 2026


La Playlist: Grock (instrumental) Daniella – Down town man Rdx – Gangsta want to have fun 2 Elephant Man – Gangsta rock Ce’cile – Bun rapist Macka Diamond – Dem gal deh Chico – Gangsta anthem Kiprich – Hombre T.O.K – Gangsta Bling dawg – Fool Voicemail – Get crazy Sanjay – Hard pon me […] L'article Strickly Good Sound – #54 est apparu en premier sur Radio Campus Tours - 99.5 FM.

Auto Supply Chain Prophets
Agentic AI Isn't the Future. It's the Line Between Winners and Laggards

Auto Supply Chain Prophets

Play Episode Listen Later Jan 19, 2026 33:08 Transcription Available


Automotive manufacturing leaders have no shortage of data, but only those who turn it into action are winning, and AI is the accelerator.In this milestone episode, Jan Griffiths is joined by Sanjay Brahmawar, CEO of QAD, and Dr. Bryan Reimer, MIT Research Scientist and author of How to Make AI Useful, for a grounded conversation about how AI is creating real advantage in automotive manufacturing.The challenge facing automotive manufacturing leaders is not visibility. Leaders know where problems exist. The issue is that action often stalls between insight and execution. Dashboards explain what happened. They do not decide what happens next.Sanjay and Bryan draw a clear distinction between systems of record and systems of action. Systems of record observe. Systems of action decide, execute, and learn. Agentic AI belongs in the second category. It creates value when it removes friction from work, accelerates routine decisions, and gives people better context at the moment action is required.Frontline teams in automotive manufacturing do not resist AI. They adopt it when it respects their expertise and helps them do their jobs better. Adoption follows usefulness, not mandates. When AI amplifies human judgment instead of supervising it, execution speed improves and results follow.This episode challenges automotive manufacturing leaders to stop treating AI as a reporting layer and start using it as an execution engine. The organizations pulling ahead are not waiting for perfect conditions. They are starting small, learning fast, and letting action build confidence.Themes Discussed in this episode:Why data visibility alone does not drive performance in automotive manufacturingSystems of record vs systems of actionHow AI removes friction from automotive manufacturing operationsFrontline-first AI adoption in plantsAgentic AI as an execution multiplierLeadership ownership of decisionsBuilding momentum with 60 to 90-day winsFeatured Guests: Name: Sanjay BrahmawarTitle: CEO of QAD About: Sanjay Brahmawar is the CEO of QAD, a cloud software company delivering cloud-based solutions for manufacturers and global supply chains. With more than two decades of experience leading global technology businesses, he brings deep expertise in digital transformation, AI, IoT, and data-driven platforms, built through senior leadership roles at IBM and Software AG.Connect: LinkedInName: Dr. Bryan ReimerAbout: Dr. Bryan Reimer is a Research Scientist at the MIT Center for Transportation & Logistics and a key member of the MIT AgeLab. He is also the author of How to Make AI Useful: Moving beyond the hype to real progress in business, society and life. His work focuses on how...

Raj Shamani - Figuring Out
Why Indian Homes Feel Smaller: Space, Furniture, Design & Planning | Sanjay Puri | FO459 Raj Shamani

Raj Shamani - Figuring Out

Play Episode Listen Later Jan 17, 2026 77:08


To know more about the Real Advice initiative by Birla Estates, visit: https://www.birlaestates.com/realadvice/Guest Suggestion Form: ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠https://forms.gle/bnaeY3FpoFU9ZjA47⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠Disclaimer: This video is intended solely for educational purposes and opinions shared by the guest are his personal views. We do not intent to defame or harm any person/ brand/ product/ country/ profession mentioned in the video. Our goal is to provide information to help audience make informed choices. The media used in this video are solely for informational purposes and belongs to their respective owners.Order 'Build, Don't Talk' (in English) here: ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠https://amzn.eu/d/eCfijRu⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠Order 'Build Don't Talk' (in Hindi) here: ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠https://amzn.eu/d/4wZISO0⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠Follow Our Whatsapp Channel: ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠https://www.whatsapp.com/channel/0029VaokF5x0bIdi3Qn9ef2J⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠Subscribe To Our Other YouTube Channels:-⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠https://www.youtube.com/@rajshamaniclips⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠https://www.youtube.com/@RajShamani.Shorts⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠(00:00) - Intro(03:36) - His first introduction to architecture(06:07) - What to look out for while buying a house(09:55) - What's so special about Zaha Hadid?(12:00) - Top 3 Indian architectural mavericks(17:21) - Two global architectural mavericks(22:08) - Sustainable projects(34:23) - Why are luxury hotel bathrooms made of glass?(36:25) - What is contextual design?(39:08) - Different methods used to build different houses(43:23) - Trends in the residential segment(49:06) - Why Indians are obsessed with storage(52:27) - How to improve your house(1:00:06) - How lighting affects your space(1:03:44) - Do open kitchens work in India?(1:06:45) - Common mistakes people make while building a house(1:15:48) - BTS(1:16:18) - OutroIn today's episode, we have Sanjay Puri, one of India's leading architects, sharing practical and eye-opening insights on why most Indian homes feel smaller, hotter, and more uncomfortable than they should. We talk about the basic design mistakes people make while buying or building homes, and why space planning matters more than luxury interiors. We also talk about sustainable buildings and what real sustainability means for a common homebuyer beyond just environmental labels. The conversation highlights how bold architectural thinking can shape cities, improve quality of life, and even transform local economies.Subscribe for more such conversations!Follow Sanjay Puri Here:Instagram: https://www.instagram.com/sanjay_puri_architects/Follow Birla Estates Here:Instagram: https://www.instagram.com/birlaestates/⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠About Raj ShamaniRaj Shamani is an Entrepreneur at heart that explains his expertise in Business Content Creation & Public Speaking. He has delivered 200+ speeches in 26+ countries. Besides that, Raj is also an Angel Investor interested in crazy minds who are creating a sensation in the Fintech, FMCG, & passion economy space.To Know More,Follow Raj Shamani On ⤵︎Instagram @RajShamani ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠https://www.instagram.com/rajshamani/⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠Twitter @RajShamani ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠https://twitter.com/rajshamani⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠Facebook @ShamaniRaj ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠https://www.facebook.com/shamaniraj⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠LinkedIn - Raj Shamani ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠https://www.linkedin.com/in/rajshamani/⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠About Figuring OutFiguring Out Podcast is a Candid Conversations University where Raj Shamani brings raw conversations with the Top 1% in India.

Coronavirus: Fact vs Fiction
How Far Would You Go to Replace Your Body? Mary Roach Has Thoughts

Coronavirus: Fact vs Fiction

Play Episode Listen Later Jan 16, 2026 26:18


For centuries, humans have tried to repair and replace our body parts -- from brass noses and pig organs to today's lab-grown tissue. So where do we stand now?  Sanjay sits down with author Mary Roach to discuss her newest book, Replaceable You: Adventures in Human Anatomy, which explores the wild history and newest experiments behind human “upgrades,” from 3D‑printed muscle to the ethics of elective amputation and what these innovations mean for our aging bodies.     Our show was produced by Jennifer Lai with assistance from Leying Tang.   Medical Writer: Andrea Kane Showrunner: Amanda Sealy Senior Producer: Dan Bloom Technical Director: Dan Dzula  Learn more about your ad choices. Visit podcastchoices.com/adchoices

Side Hustle to Small Business
Lesle Lane expands a family enterprise

Side Hustle to Small Business

Play Episode Listen Later Jan 14, 2026 33:04


Lesle Lane is the third-generation owner of Studio 13 Corporate Photography, which began as a family business founded by her grandparents and has evolved into a respected photography studio known for its craft, consistency, and client-focused approach. Her leadership continues a legacy built over decades, with a focus on adapting to new technologies while honoring the studio's original artistic vision.   In this episode, Lesle shares her inspiring journey from taking over the studio in 1992 to navigating the challenges of running a multi-generational business. She and host Sanjay discuss how youth sports can lay the foundation for entrepreneurship, how to maintain quality and consistency as you hire, and what it takes to preserve a legacy while continuing to grow.   What you'll learn: • How to successfully lead and evolve a multi-generational family business • The connection between youth sports and entrepreneurial resilience • Strategies for maintaining quality and consistency when hiring • The story behind Studio 13 Photography's long-term success   Learn more about Studio 13 Corporate Photography at https://www.studio13online.com/   Chapters: 00:00 Introduction and Background 7:28 Taking over a family business 11:49 Maintaining family legacy 14:00 Shifting the business's focus 15:50 Expanding the business 18:55 Incorporating AI into the business 23:43 Work-life balance 28:43 Reflecting on the business 30:43 Advice for other entrepreneurs 31:55 Closing and contact   #SmallBusiness #Photography #Podcast   At Hiscox, we provide customized insurance solutions for small businesses and entrepreneurs, empowering you to take risks with confidence. With over 100 years of expertise, we offer coverage options like general liability and professional liability, helping you protect what matters most. Learn more at hiscox.com.

The Show Up Fitness Podcast
Becoming a Trainer in India: Inside the Mind of Coach Sanjay Duseja

The Show Up Fitness Podcast

Play Episode Listen Later Jan 14, 2026 42:51 Transcription Available


Send us a text if you want to be on the Podcast & explain why!Coach Sanjay IG: yourfitnesscoach.inTired of hearing “the gym keeps 70%, trainers get 30%” and wondering how to break the cycle? We sat down with Sanjay Duseja, who went from a small town in Madhya Pradesh to training 2,000+ clients across 40 countries, to map a smarter path through India's fitness industry. His story shows you don't need a big city to win—you need competence built on four pillars: education, experience, communication, and living what you teach.We dig into why low entry barriers and a lack of regulation depress pay and quality, and how owners and trainers often talk past each other. Sanjay explains how he funded early certifications while working in IT, moved online during lockdowns, and built trust with credentials, case studies, and simple, effective assessments. You'll hear why a goal-first approach beats cookie-cutter routines: athletes can chase intensity and frequency because performance is their job, while general clients need sustainable programming that fits around work, family, and recovery. Assess movement, strength, and cardio, then tailor exercise selection, volume, and frequency to what the client actually wants—like playing with their grandchild for an hour without gasping.Sanjay's “Year of Growth” experiment adds rare empathy. By deliberately gaining significant weight and then reversing course, he experienced breathlessness, back pain, poor sleep, and mental strain firsthand, and translated those lessons into coaching that meets clients where they are. We close with career design: escape the trap of 10–12 sessions a day by building rare skills, specializing intelligently, and capturing proof of outcomes so your hours go down and your income goes up. In a market with low barriers, top 1 percent competency stands out quickly—if you commit to learning and apply it with integrity.If this conversation sparked an idea, subscribe, share with a trainer friend, and leave a quick review. Tell us: which pillar are you doubling down on next—education, experience, communication, or walking the talk?Want to become a SUCCESSFUL personal trainer? SUF-CPT is the FASTEST growing personal training certification in the world! Want to ask us a question? Email info@showupfitness.com with the subject line PODCAST QUESTION to get your question answered live on the show! Website: https://www.showupfitness.com/Become a Successful Personal Trainer Book Vol. 2 (Amazon): https://a.co/d/1aoRnqANASM / ACE / ISSA study guide: https://www.showupfitness.com

Coronavirus: Fact vs Fiction
Why Does Everyone Have the Flu?

Coronavirus: Fact vs Fiction

Play Episode Listen Later Jan 13, 2026 13:36


If it seems like everyone you know has the flu right now, you're not that far off.  The US has had a record-breaking flu season and isn't over yet. With the help of CNN medical correspondent Meg Tirrell, Dr. Sanjay Gupta explains how it's not too late to protect yourself. Plus, Sanjay breaks down the recent changes to the US dietary guidelines. Producer: Sofia Sanchez Medical Writer: Andrea Kane Showrunner: Amanda Sealy Senior Producer: Dan Bloom Technical Director: Dan Dzula Executive Producer: Steve Lickteig Learn more about your ad choices. Visit podcastchoices.com/adchoices

Essentially You: Empowering You On Your Health & Wellness Journey With Safe, Natural & Effective Solutions
711: Estrogen, Inflammation, and Your Heart Health: What Every Woman Needs to Know with Dr. Sanjay Bhojraj

Essentially You: Empowering You On Your Health & Wellness Journey With Safe, Natural & Effective Solutions

Play Episode Listen Later Jan 13, 2026 65:29


Believe it or not, but there is as much cardiovascular risk associated with smoking as there is with the drop in estrogen in perimenopause and menopause.  But don't worry–  I've brought the hilarious, talented, and brilliant Dr. Sanjay Bhojraj onto the podcast to dive into why midlife is a uniquely important—and often overlooked—window for cardiometabolic health, and what you can do about it.  In this entertaining episode, Dr. Bhojraj breaks down why the drop in estrogen matters just as much as traditional risk factors and why so many women's concerns are missed in conventional care.  We also reframe hormone therapy not as symptom management, but as a powerful tool for optimization, longevity, and long-term heart and metabolic health.  With humor and clarity, this conversation reminds you that you don't need to overhaul your entire life —small, strategic changes truly add up. Tune in here to learn how to simply and efficiently boost your health, starting with your heart!  Dr. Sanjay Bhojraj Dr. Sanjay Bhojraj is an interventional cardiologist and Institute for Functional Medicine–certified physician who bridges conventional cardiology with functional and lifestyle medicine. He spent 20 years treating disease with procedures and medications, but then shifted his focus toward uncovering root causes to help patients reverse chronic cardiometabolic conditions through food, sleep, stress optimization, and lifestyle change. Dr. Sanjay is the founder of the Well12 program and host of The Curious Cardiologist podcast, where he explores the science of longevity, inflammation, and human performance. His passion is helping people reclaim their health through simple, lasting changes. IN THIS EPISODE How Dr. Bhojraj moved into functional cardiac medicine  What age to start paying closer attention to cardiovascular health Advocating for yourself and what labs you should be asking for  Why taking care of your cardiovascular health is crucial during midlife hormonal changes  Looking at your metabolic health before starting HRT  Metabolic markers to pay attention to in midlife  Non-negotiable small tweaks to improve your heart health  How bringing joy to your life can be the best tool for your health  QUOTES “We need to talk about sleep, we need to have an intelligent discussion about diet and what that means. And we need to think about hormones too, because hormonal health is so central to cardiovascular health, and that is just not something that we talk about in cardiovascular training.” “It's like contributing to your 401k in your twenties, right? Like $200 doesn't seem like a lot, but when you're 65, you're now retiring a millionaire– depending on how well you invest, because you made those small, little, little habits, those little changes. So that decade of the forties becomes super important, particularly because in women, that's when perimenopause starts to happen, right?” “The point is that I think first of all, start with something that you can do, right? Again, it's 1% shifts. You don't have to change a thousand different things.”RESOURCES MENTIONED Order my new book: The Perimenopause Revolution https://peri-revolution.com/ Check out Dr. Sanjay's Signature Well12 Metabolic Program HERE! https://lagunamedicine.com/well12 Dr. Sanjay's Website Dr. Sanjay on Instagram Dr. Sanjay's Podcast: The Curious Cardioloist Dr. Sanjay on Facebook RELATED EPISODES  704: Hormone Intelligence for Women in Midlife: How to Thrive Through Perimenopause with Dr. Aviva Romm 690: The Perimenopause Revolution: Why midlife isn't the end — it's the beginning of your most energized, powerful, and vibrant self 678: How to Turn Perimenopause Into Your Metabolic Window of Opportunity + My Simple Daily Protocol To Feel Amazing #451: Why Do Women Have a Higher Cardiometabolic Mortality Rate Than Men? With Dr. Sara Gottfried

The Scuffed Soccer Podcast | USMNT, Yanks Abroad, MLS, futbol in America
#657: A superdraft of Pochettino's call-ups since he took over

The Scuffed Soccer Podcast | USMNT, Yanks Abroad, MLS, futbol in America

Play Episode Listen Later Jan 13, 2026 87:52


Vince, Greg, Sanjay and Belz duke it out to see who can craft the best team in a draft of all the players Poch has called up. Let the games begin! I'll post the lineups and some polls shortly.Germany trip: https://docs.google.com/forms/d/e/1FAIpQLSfI4Cp1VpS2eCphsNjf6QHdaRDq86Tf-FeUhJ2tQ0RzkbxQhw/viewform Skip the ads! Subscribe to Scuffed on Patreon and get all episodes ad-free, plus any bonus episodes. Patrons at $5 a month or more also get access to Clip Notes, a video of key moments on the field we discuss on the show, plus all patrons get access to our private Discord server, live call-in shows, and the full catalog of historic recaps we've made: https://www.patreon.com/scuffedAlso, check out Boots on the Ground, our USWNT-focused spinoff podcast headed up by Tara and Vince. They are cooking over there, you can listen here: https://boots-on-the-ground.simplecast.comAnd check out our MERCH, baby. We have better stuff than you might think: https://www.scuffedhq.com/store Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Silicon Valley Tech And AI With Gary Fowler
No Lines, No Lag: How Digital Tourism is Creating Seamless, Personalized Travel Experiences with Sanjay Bhatia

Silicon Valley Tech And AI With Gary Fowler

Play Episode Listen Later Jan 12, 2026 27:45


“No Lines, No Lag: How Digital Tourism Is Redefining the Travel Experience” with Sanjay Bhatia. Explore how technology is transforming travel by removing friction and delays—from mobile check-ins and keyless hotel entry to AI-powered chatbots and real-time booking platforms. Learn how digital tools make every step faster, smarter, and more personalized, allowing travelers to move seamlessly through their journeys while staff focus on meaningful interactions. Digital tourism isn't just speeding things up—it's reshaping what travel feels like.#DigitalTourism #TravelTech #SmartTravel #SeamlessTravel #AIinTravel #PersonalizedTravel #CustomerExperience #TravelInnovation #NoLinesNoLag #EffortlessTravel

Coronavirus: Fact vs Fiction
Why You're Breathing Wrong, and How to Fix It

Coronavirus: Fact vs Fiction

Play Episode Listen Later Jan 9, 2026 30:04


Chronic disease, anxiety, ADHD, and even the shape of a person's face could be consequences of dysfunctional breathing. And most of us, it turns out, are doing it wrong – but it's never too late to fix it. Sanjay sits down with journalist James Nestor to discuss the fifth anniversary edition of Breath: The New Science of a Lost Art, and how simple changes to the way we breathe can start improving our health right away.     Our show was produced by Jesse Remedios.  Medical Writer: Andrea Kane Showrunner: Amanda Sealy Senior Producer: Dan Bloom Technical Director: Dan Dzula  Learn more about your ad choices. Visit podcastchoices.com/adchoices

The Traidar: A Traitors Podcast
Listener Q&A The Traitors UK S4 E4

The Traidar: A Traitors Podcast

Play Episode Listen Later Jan 9, 2026 43:26


In our bonus ep, we discuss YOUR questions, thoughts, and theories all about episode four of The Traitors UK Season Four! Hosted by Matthew Keeley and guest co-host Sanjay Lago!Sanjay's Insta: @sanjaylagoSanjay's Bluesky: https://bsky.app/profile/creativesanj.bsky.socialBritish Vogue Article: https://www.vogue.co.uk/article/the-traitors-season-4-unconscious-biasIndependent Article: https://www.independent.co.uk/arts-entertainment/tv/features/bbc-traitors-season-4-racial-bias-b2896660.htmlMatthew's Insta/TikTok: @matthewjkeeleyMatthew's Facebook: facebook.com/matthewkeeleywriterMatthew's NEW Poetry Collection: https://www.drunkmusepress.com/shop/p/from-the-juniper-room-by-matthew-keeleyPodcast Merch: thetraidar.redbubble.comPodcast Survey: https://podcastsurvey.typeform.com/to/XU9fJKnOVoice Message for the Pod: memo.fm/thetraidarpodcastPodcast Ko-fi page: https://ko-fi.com/matthewkeeleyPodcast Instagram, TikTok, and YouTube: @thetraidarpodcastEmail: thetraidarpodcast@gmail.com Hosted on Acast. See acast.com/privacy for more information.

The Traidar: A Traitors Podcast
The Traitors UK S4 E4

The Traidar: A Traitors Podcast

Play Episode Listen Later Jan 9, 2026 99:39


A deep-dive into episode four of The Traitors UK Season Four! Hosted by Matthew Keeley and guest co-host Sanjay Lago!Sanjay's Insta: @sanjaylagoSanjay's Bluesky: https://bsky.app/profile/creativesanj.bsky.social'The Betrayers' A Traitor Experience in Brighton: https://www.eventbrite.co.uk/e/the-betrayers-a-traitor-experience-tickets-1977032323901Joe's Insta: @jcruffinoMatthew's Insta/TikTok: @matthewjkeeleyMatthew's Facebook: facebook.com/matthewkeeleywriterMatthew's NEW Poetry Collection: https://www.drunkmusepress.com/shop/p/from-the-juniper-room-by-matthew-keeleyPodcast Merch: thetraidar.redbubble.comPodcast Survey: https://podcastsurvey.typeform.com/to/XU9fJKnOVoice Message for the Pod: memo.fm/thetraidarpodcastPodcast Ko-fi page: https://ko-fi.com/matthewkeeleyPodcast Instagram, TikTok, and YouTube: @thetraidarpodcastEmail: thetraidarpodcast@gmail.com Hosted on Acast. See acast.com/privacy for more information.

CarDealershipGuy Podcast
The Time-to-Sale Problem Dealers Are Dying to Fix (and Who's Figured it out) | Pre-NADA AI Spotlight #2 | Sanjay Varnwal, Co-Founder & CEO at Spyne.ai

CarDealershipGuy Podcast

Play Episode Listen Later Jan 8, 2026 32:12


Today I'm joined by Sanjay Varnwal, Co-Founder & CEO at Spyne.ai. In part 2 of our Pre NADA AI Spotlight series we break down how AI is actually being deployed inside dealerships today—from faster merchandising to smarter lead handling—and where it's delivering real ROI versus hype. Sanjay explains how workflow optimization and tech partnerships are reshaping the dealer tech stack, why voice AI has accelerated dramatically in the last 6–9 months, and what data security and compliance mean in an AI-first world. This episode is brought to you by: 1. Lotlinx - What if ChatGPT actually spoke dealer? Meet LotGPT — the first AI chatbot built just for car dealers. Fluent in your market, your dealership, and your inventory, LotGPT delivers instant insights to help you merchandise smarter, move inventory faster, and maximize profit. It pulls from your live inventory, CRM, and Google Analytics to give VIN-specific recommendations, helping dealers price vehicles accurately, spot wasted spend, and uncover the hottest opportunities — all in seconds. LotGPT is free for dealers, but invite-only. Join the waitlist now @ http://Lotlinx.com/LotGPT 2. Merchant Advocate - Merchant Advocate saves businesses money on credit card fees WITHOUT switching processors. Find out how they can help your dealership with a FREE analysis. Click on @ http://merchantadvocate.com/cdg for more. 3. Spyne - Meet Vini by Spyne — your dealership's AI-powered Conversational Agent. Vini answers every call and chat, engages customers instantly, and uncovers hidden revenue in your CRM and inventory. Every missed opportunity? Found and converted. Learn more at http://spyne.ai/vini-cdg Check out Car Dealership Guy's stuff: For dealers: CDG Circles ➤ ⁠⁠https://cdgcircles.com/⁠⁠ Industry job board ➤ ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠http://jobs.dealershipguy.com⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ Dealership recruiting ➤ ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠http://www.cdgrecruiting.com⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ Fix your dealership's social media ➤ ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠http://www.trynomad.co⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ Request to be a podcast guest ➤ ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠http://www.cdgguest.com⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ For industry vendors: Advertise with Car Dealership Guy ➤ ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠http://www.cdgpartner.com⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ Industry job board ➤ ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠http://jobs.dealershipguy.com⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ Request to be a podcast guest ➤ ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠http://www.cdgguest.com⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ Topics: 00:42 What is Sanjay's background in hospitality? 01:41 How is AI impacting the automotive industry? 02:48 What are the biggest challenges in AI adoption? 05:50 Explaining Spyne's technology in detail 08:55 How does voice AI improve dealer engagement? 14:13 What is the future of AI in dealerships? Car Dealership Guy Socials: X ➤ ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠x.com/GuyDealership⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ Instagram ➤ ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠instagram.com/cardealershipguy/⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ TikTok ➤ ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠tiktok.com/@guydealership⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ LinkedIn ➤ ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠linkedin.com/company/cardealershipguy⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ Threads ➤ ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠threads.net/@cardealershipguy⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ Facebook ➤ ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠facebook.com/profile.php?id=100077402857683⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ Everything else ➤ ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠dealershipguy.com⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠

Side Hustle to Small Business
Miriam Schulman built a life around her creative work

Side Hustle to Small Business

Play Episode Listen Later Jan 7, 2026 31:45


Miriam Schulman is the Founder of Schulman Art, which began as a resource for creative entrepreneurs and has evolved into a comprehensive coaching service helping artists turn their talents into sustainable businesses. Her innovative approach supports artists' success through mentorship, educational resources, and community-building.   In this episode of the Side Hustle to Small Business® Podcast, Miriam shares her inspiring journey from working in corporate finance to launching Schulman Art. She and host Sanjay discuss monetizing creativity, building supportive communities for artists, overcoming challenges as a creative entrepreneur, and her book "Artpreneur".   What You'll Learn: • How to transform a creative passion into a sustainable business • Strategies for monetizing and marketing artistic work • Building supportive communities for creative entrepreneurs • The story behind launching Artpreneur and empowering artists   Learn more about Miriam and Schulman Art at https://www.schulmanart.com/   Chapters: 00:00 Introduction and background 8:30 Overcoming nerves 12:00 Building a coaching business 15:33 Challenges artists face 19:42 Balancing life and work 25:58 Incorporating AI into your business 27:55 Reflecting on the business 29:55 Advice for other entrepreneurs 30:32 Closing and contact   #SmallBusiness #SideHustle #art

KeyLIME
[32] Designing Learning That Learns: The Promise of Precision Medical Education

KeyLIME

Play Episode Listen Later Jan 6, 2026 35:57


In this episode of KeyLIME+, Adam speaks with Dr. Sanjay Desai to explore the concept of precision education in medical training. They discuss how data and technology can personalize medical education, making it more efficient and tailored to individual learner needs. Sanjay emphasizes the importance of using data to assess learner performance and the need for a shift in power towards learners, allowing them greater agency in their education. The conversation also touches on the foundational principles of precision education, the challenges of implementing these innovations, and the balance between fostering innovation and ensuring trust in medical education.   Length of episode:  35:28  Contact    Contact us: keylime@royalcollege.ca      Follow: Dr. Adam Szulewski https://x.com/Adam_Szulewski       

Beauty Bytes with Dr. Kay: Secrets of a Plastic Surgeon™
794: Needle-Free Regenerative Medicine: Exosomes, Hair Growth, & The Science of Juvasonic with Dr. Sanjay Batra

Beauty Bytes with Dr. Kay: Secrets of a Plastic Surgeon™

Play Episode Listen Later Jan 6, 2026 38:51


In this episode of Beauty Bytes, I am exploring the frontier of needle-free regenerative medicine with Dr. Sanjay Batra, a globally recognized scientist and the creator of Juvasonic. We discuss how his device uses a unique combination of sonic vibration and dermabrasion to open the stratum corneum for 10-15 minutes, allowing large molecules like exosomes, polynucleotides, and PRP to penetrate deep into the dermis without a single needle prick.Dr. Batra shares why microneedling might not be the best delivery method for serums (hint: the bleeding actually pushes products out!) and reveals incredible data on using this technology for hair density and restoration . We also cover a fascinating off-label use: how Juvasonic can potentially break down filler nodules (like Radiesse lumps) in just seconds without invasive measures . This is the future of pain-free, high-compliance aesthetic care.

Coronavirus: Fact vs Fiction
Pain Becomes Personal for Sanjay

Coronavirus: Fact vs Fiction

Play Episode Listen Later Dec 30, 2025 19:14


Even as a trauma neurosurgeon, Dr. Sanjay Gupta thought he understood pain. But, when his mother fell and broke her back, it changed the way he thought about pain and how it impacts people's lives.  Learn more about your ad choices. Visit podcastchoices.com/adchoices

Coronavirus: Fact vs Fiction
Sanjay's Top 10 Health Stories of 2025

Coronavirus: Fact vs Fiction

Play Episode Listen Later Dec 23, 2025 11:55


From the resurgence of measles to a new way to treat pain, 2025 was a challenge for public health while still offering moments of hope. Sanjay recaps the year with his top 10 health stories of 2025. Producer & Showrunner: Amanda Sealy Medical Writer: Amanda Sealy Senior Producer: Dan Bloom Technical Director: Dan Dzula Executive Producer: Steve Lickteig Learn more about your ad choices. Visit podcastchoices.com/adchoices

sanjay health stories
The Startup Junkies Podcast
4: How Believ.ai Powers Seamless Onboarding for Supply Chain Platforms with Sanjay Ahuja

The Startup Junkies Podcast

Play Episode Listen Later Dec 22, 2025 18:32


SummaryIn this episode of the Fuel Podcast, host Caleb Talley sits down with Sanjay Ahuja, founder of Believ.ai, to discuss his inspiring entrepreneurial journey. Sanjay shares how, after twenty-seven years of thriving in global corporate roles across India, Dubai, and the U.S., he embraced his entrepreneurial spirit to address a crucial gap in AI: clean data.Believ.ai emerged from Sanjay's firsthand experience with the complexities of onboarding merchants efficiently and securely. The platform's focus is on streamlining onboarding for marketplaces, payments, and logistics companies, tackling the dual challenge of speed and fraud prevention. Additionally, Sanjay explains his experience with Fuel Accelerator, describing Bentonville's unique, supportive environment for tech founders, and praising the city's vibrant entrepreneurial ecosystem.For those looking to launch their own ventures, Sanjay offers sage advice: “Be an entrepreneur at an early stage of life. You'll fail, and that's fine—you'll learn and move on to build something bigger and better.” This episode is a must-listen for founders seeking inspiration, practical insights, and a sense of belonging in the startup community!Show Notes(00:00) Introduction(04:22) Believ: An AI-Powered Onboarding Platform(07:07) Sanjay's Fuel and Bentonville Experience(12:20) Insights into the Fuel Accelerator(16:09) Adopting an Entrepreneurial Mindset(17:34) Closing ThoughtsLinksCaleb TalleyFuel AcceleratorFuel Accelerator YouTubeSanjay AhujaBeliev.ai

Everyday Wellness
Ep. 531 Your Heart's Not Just Skipping Beats – The Shocking Truth About Palpitations in Women | Women's Cardiovascular Health with Dr. Sanjay Bhojraj

Everyday Wellness

Play Episode Listen Later Dec 20, 2025 61:31


I am thrilled to reconnect with Dr. Sanjay Bhojraj today. Dr. Bhojraj is a board-certified interventional cardiologist who became a pioneer in functional medicine. In our conversation, we dive into palpitations, which are a common complaint among perimenopausal and menopausal women. We explore red flag symptoms, the physiological effects of progesterone, estrogen, and testosterone as they relate to heart arrhythmias, EKG changes during the perimenopause-to-menopause transition, and wearable technologies. We unpack the differences between benign and more concerning arrhythmias, risk factors for atrial fibrillation, and the process of taking a thorough history, ordering the correct tests, and using imaging or sleep studies when appropriate. We cover treatment pathways, from lifestyle modifications to medications, channelopathies, and the genetic propensities for conditions such as Long QT, Brugada Syndrome, WPW (Wolff-Parkinson-White syndrome), and sudden cardiac death. We also highlight the importance of genetic testing for individuals with a family history of those conditions. Today's conversation with Dr. Sanjay Bhojraj is full of practical wisdom and clinical pearls, so you will most likely want to listen to it more than once. IN THIS EPISODE, YOU WILL LEARN: Why thyroid function should always be taken into account when assessing heart rhythm issues   How stress and life circumstances can trigger palpitations  The benefits of magnesium supplementation for supporting heart health What ventricular arrhythmias (from the bottom chambers) and atrial arrhythmias (from the top chambers) are commonly related to The value of monitoring for identifying the nature and severity of arrhythmias How sleep apnea can increase the risk of arrhythmia   The importance of exercise, stress management, and healthy lifestyle habits for supporting heart rhythm Why certain arrhythmias may require procedural interventions Why various types of athletic activity matter when evaluating arrhythmias How genetic factors can impact specialized heart assessments Connect with Cynthia Thurlow   Follow on X, Instagram & LinkedIn Check out Cynthia's website Submit your questions to support@cynthiathurlow.com Join other like-minded women in a supportive, nurturing community (The Midlife Pause/Cynthia Thurlow)  Cynthia's Menopause Gut Book is on presale now! Cynthia's Intermittent Fasting Transformation Book The Midlife Pause supplement line Connect with Dr. Sanjay Bhojraj On his website On social media: @DoctorSanjayMD The Curious Cardiologist Podcast

Side Hustle to Small Business
Aquila Mendez-Valdez scales through franchising and community

Side Hustle to Small Business

Play Episode Listen Later Dec 17, 2025 31:16


Aquila Mendez-Valdez is the Founder of Haute in Texas, which began as a personal blog and has evolved into a full-service public relations agency empowering women-led businesses. Her innovative approach combines storytelling, strategy, and community to help brands grow with authenticity and purpose.   In this episode, Aquila shares her inspiring journey from launching a business while pregnant to scaling her company into a thriving agency. She and Sanjay discuss hiring your first employee, creating a franchise model, and the importance of surrounding yourself with a strong, supportive community.   What You'll Learn: • How to build a business while balancing family and entrepreneurship • When and how to hire your first employee • What it takes to develop a franchise model • The power of community in sustaining business growth   Learn more about Haute in Texas at https://hitpr.com/   Chapters  00:00 Introduction and background  5:53 Overcoming nerves 11:31 Growing a company culture 13:42 Differentiating your company 17:23 Advice for small businesses 20:12 Scaling the business 24:52 Balancing work and life 30:08 Closing and contact    #SmallBusiness #PublicRelations #Podcast   At Hiscox, we provide customized insurance solutions for small businesses and entrepreneurs, empowering you to take risks with confidence. With over 100 years of expertise, we offer coverage options like general liability and professional liability, helping you protect what matters most. Learn more at hiscox.com.

The Secure Developer
A Vision For The Future Of Enterprise AI Security With Sanjay Poonen

The Secure Developer

Play Episode Listen Later Dec 16, 2025 27:30


Episode SummaryThe future of cyber resilience lies at the intersection of data protection, security, and AI. In this conversation, Cohesity CEO Sanjay Poonen joins Danny Allan to explore how organisations can unlock new value by unifying these domains. Sanjay outlines Cohesity's evolution from data protection to security in the ransomware era, to today's AI-focused capabilities, and explains why the company's vast secondary data platform is becoming a foundation for next-generation analytics.Show NotesIn this episode, Sanjay Poonen shares his journey from SAP and VMware to leading Cohesity, highlighting the company's mission to protect, secure, and provide insights on the world's data. He explains the concept of the "data iceberg," where visible production data represents only a small fraction of enterprise assets, while vast amounts of "dark" secondary data remain locked in backups and archives. Poonen discusses how Cohesity is transforming this secondary data from a storage efficiency problem into a source of business intelligence using generative AI and RAG, particularly for unstructured data like documents and images.The conversation delves into the technical integration of Veritas' NetBackup data mover onto Cohesity's file system, creating a unified platform for security scanning and AI analytics. Poonen also elaborates on Cohesity's collaboration with NVIDIA, explaining how they are building AI applications like Gaia on the NVIDIA stack to enable on-premises and sovereign cloud deployments. This approach allows highly regulated industries, such as banking and the public sector, to utilize advanced AI capabilities without exposing sensitive data to public clouds.Looking toward the future, Poonen outlines Cohesity's "three acts": data protection, security (ransomware resilience), and AI-driven insights. He and Danny Allan discuss the critical importance of identity resilience, noting that in an AI-driven world, the security perimeter shifts from network boundaries to the identities of both human users and autonomous AI agents.LinksCohesityNvidiaSnyk - The Developer Security Company Follow UsOur WebsiteOur LinkedIn

Coronavirus: Fact vs Fiction
Why Everyone's Talking About Mouth Taping

Coronavirus: Fact vs Fiction

Play Episode Listen Later Dec 9, 2025 16:52


There's a new sleep trend making waves: taping your mouth shut at night. Advocates say it can help you breathe better, sleep deeper, and even wake up more energized. But is it safe -- or could it put your health at risk? Sanjay breaks down what the science says about this viral “hack” and what doctors want you to know before you try it. Plus, a listener asks about calcium supplements and testosterone for bone loss.   This episode was produced by Jennifer Lai  Medical Writer: Andrea Kane Showrunner: Amanda Sealy Senior Producer: Dan Bloom Technical Director: Dan Dzula Executive Producer: Steve Lickteig Learn more about your ad choices. Visit podcastchoices.com/adchoices

Side Hustle to Small Business
Katherine Klimitas built a career from her childhood passion

Side Hustle to Small Business

Play Episode Listen Later Dec 3, 2025 33:25


Katherine Klimitas is the Founder of KAK Art & Designs, which began as a childhood hobby painting animals for her parents' veterinary clients and has evolved into a thriving business specializing in pet portraits and graphic design. Her innovative approach supports artists and pet lovers alike through personalized artwork and creative design solutions. In this episode of the Side Hustle to Small Business® Podcast, Katherine shares her inspiring journey from using art as a way to engage during childhood limitations to building a successful creative business. She and host Sanjay discuss turning a passion into profit, developing a unique artistic style, and growing a business centered on creativity and connection.   What You'll Learn: • How to turn a personal hobby into a thriving creative business • Strategies for marketing and selling custom artwork • Building a brand that connects with clients emotionally • The story behind KAK Art & Designs and Katherine's unique artistic journey Learn more about KAK Art & Designs at https://kakartnola.com/   Chapters: 00:00 Introduction and background 11:10 Building the business and getting clients 15:00 Finding the proper rates 17:00 Setbacks in the business 19:22 Dealing with rejection 21:55 Balancing life and work 23:46 Engaging with community 25:43 Reflecting on the business 28:40 Technology and app suggestions 30:22 Advice for other entrepreneurs 31:49 Closing and contact   #SmallBusiness #art #graphicdesign   At Hiscox, we provide customized insurance solutions for small businesses and entrepreneurs, empowering you to take risks with confidence. With over 100 years of expertise, we offer coverage options like general liability and professional liability, helping you protect what matters most. Learn more at hiscox.com.

Side Hustle to Small Business
Robert Carnes grows a freelance writing business while working in marketing

Side Hustle to Small Business

Play Episode Listen Later Nov 26, 2025 33:07


Robert Carnes is a freelance copywriter and author who started his copywriting business while working full-time in marketing. What began as a creative outlet has grown into a thriving freelance practice, helping clients craft compelling content and develop their brands.   In this episode of the Side Hustle to Small Business® Podcast, Robert shares his journey from balancing a full-time marketing career with a side hustle to building a successful copywriting business. He and host Sanjay discuss the evolving role of AI in writing, strategies for overcoming imposter syndrome, and the lessons he's learned as an author and entrepreneur.   What You'll Learn: • How to launch a copywriting business alongside full-time work • Leveraging AI tools to enhance creativity and efficiency • Building confidence and overcoming imposter syndrome as a writer • Tips for turning writing skills into multiple revenue streams   Learn more about Robert Carnes at https://www.jamrobcar.com/   Chapters 00:00 Introduction and background 5:52 Scaling the business 8:28 Using automation in the business 10:44 Robert's authorship  19:37 Work-life balance 23:30 Overcoming nerves 31:47 Closing and contact   At Hiscox, we provide customized insurance solutions for small businesses and entrepreneurs, empowering you to take risks with confidence. With over 100 years of expertise, we offer coverage options like general liability and professional liability, helping you protect what matters most. Learn more at hiscox.com.

Coronavirus: Fact vs Fiction
What If Comfort Food Could Help You Live to 100?

Coronavirus: Fact vs Fiction

Play Episode Listen Later Nov 21, 2025 26:23


Could the secret to living longer be as simple as what's for dinner? Sanjay sits down with explorer and bestselling author Dan Buettner to discuss the science behind longevity, why taste (not willpower) drives healthy habits, and how affordable, plant-based recipes inspired by the world's longest-living communities can help you thrive. Plus, hear how AI cracked the code on America's favorite flavor trends -- and inspired the recipes in his new cookbook, The Blue Zones Kitchen: One Pot Meals.  Our show was produced by Jennifer Lai.   Medical Writer: Andrea Kane Showrunner: Amanda Sealy Senior Producer: Dan Bloom Technical Director: Dan Dzula  Learn more about your ad choices. Visit podcastchoices.com/adchoices

The Scuffed Soccer Podcast | USMNT, Yanks Abroad, MLS, futbol in America

Sanjay sits down with Joe at the team hotel in Tampa on Monday afternoon, to discuss his nearly 10,000 Bundesliga minutes of experience, how things have changed under Poch, who from the team he would trust the least to run the family business in Lake Grove, and much more. It's a fast but substantive interview. Skip the ads! Subscribe to Scuffed on Patreon and get all episodes ad-free, plus any bonus episodes. Patrons at $5 a month or more also get access to Clip Notes, a video of key moments on the field we discuss on the show, plus all patrons get access to our private Discord server, live call-in shows, and the full catalog of historic recaps we've made: https://www.patreon.com/scuffedAlso, check out Boots on the Ground, our USWNT-focused spinoff podcast headed up by Tara and Vince. They are cooking over there, you can listen here: https://boots-on-the-ground.simplecast.comAnd check out our MERCH, baby. We have better stuff than you might think: https://www.scuffedhq.com/store Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Coronavirus: Fact vs Fiction
Why Teens Just Can't Quit Nicotine

Coronavirus: Fact vs Fiction

Play Episode Listen Later Oct 24, 2025 25:35


A few years ago, vaping was at the top of every parent's list of worries — including Sanjay's. But in just a few short years, the landscape has shifted again. Teen vaping rates have dropped, but new nicotine products have quickly taken their place. Dr. Pamela Ling, a professor at University of California San Francisco who has spent her career studying the tobacco industry's tactics, joins Dr. Sanjay Gupta to talk about why nicotine remains such a moving target — and how parents can help their kids stay ahead of it. Producer: Jesse Remedios Senior Producer: Dan Bloom Showrunner: Amanda Sealy Technical Director: Dan Dzula Executive Producer: Steve Lickteig Learn more about your ad choices. Visit podcastchoices.com/adchoices