Quietmind Astrology — Learn Vedic Astrology with Jeremy Devens
Unlock the full potential of astrology in New Moon Alignment at https://www.quietmindastrology.com/newmoon

Astrocartography is the study of how your birth chart shifts when you move to a new location. While your core birth planets remain the same, your rising sign and house placements change, altering how those energies manifest in your daily life. In this episode, I explain why there is no "perfect" place for everyone—only the right place for your current intention. We discuss the mechanics of relocation charts, why vertical movement differs from horizontal movement, and how to use astrology as a support for your intuition rather than a replacement for it. Whether you are seeking spiritual growth, career success, or a hospitable place for a family, understanding your local chart helps you navigate potential obstacles and harness unique opportunities.

QUOTES
“When you move, you get a different birth chart. It changes where all the planets are, all the signs, all the houses are now different in your chart.”
“Astrology is best as a support to your intuition, not a replacement for your intuition.”
“Life is like a Google search. It all starts with: how can I help you today? Google is useless until you put in the query. What do you want to cultivate?”
“Your main birth chart is always primary—it's 80 to 90% of the equation. Astrocartography is like 10 to 20%.”

TIMESTAMPS
00:00 How Moving Changes Your Birth Chart
01:14 The First Step: Using a Location-Based Chart Calculator
01:38 The Mechanics of Rising Signs and Horizons
03:08 Interpreting Your New Ascendant Themes
04:58 Why House Placements Are the Biggest Shift
05:31 Identifying Your Intention: Wealth, Health, or Family?
07:45 The Impact of Stelliums in Relocation
09:30 Trusting Your Soul's Calling Over a "Rough" Chart
10:31 Vertical vs. Horizontal Movement on the Globe
11:03 The Role of Transits, Dashas, and Timing
13:57 Planetary Placements: Rahu, Ketu, and Ego Death
14:34 Favorable Placements for Luck and Hard Work
15:40 Power Placements for Career and Success
17:32 Why Two People Have Opposite Experiences in One Place

KEYWORDS
Vedic astrology, astrocartography, relocation chart, Jyotish, rising sign, house placements, spiritual growth, career success, Rahu, Ketu, Jupiter transits.

FREE RESOURCES
⭐️ Free Birth Chart: http://www.quietmindastrology.com/freebirthchart
⭐️ Free Horoscopes: https://www.quietmindastrology.com/freehoroscopes
⭐️ Podcast (Spotify, Apple, etc): https://creators.spotify.com/pod/profile/astrology
⭐️ Instagram: http://www.instagram.com/quietmindastrology
⭐️ YouTube: http://www.quietmindastrology.com/youtube

WORK WITH ME
⭐️ Book a Reading: http://www.quietmindastrology.com/reading
⭐️ Decode Your Chart: https://www.quietmindastrology.com/101
⭐️ Mentorship: http://www.quietmindastrology.com/mentorship

QUIETMIND YOGA
⭐️ Yoga Teacher Training Podcast: https://www.anchor.fm/yogateachertraining

NEXT STEP
⭐️ Unlock the full potential of astrology in New Moon Alignment at https://www.quietmindastrology.com/newmoon
In this powerful episode of Soulful Self-Care Conversations, Pearl welcomes Janet Therese, intuitive guide and mentor for inner alignment, who helps individuals reconnect with their divine presence, clear limiting stories, and step into a more expansive, aligned life. This conversation dives deep into intuition, energy, trauma, alignment, and what it truly means to live from your highest self—not from fear, conditioning, or outdated beliefs.
Are you feeling the rapid transformation of the "Great Shift" toward 2026? Join Robyn and Colleen Benelli and returning guest Rosalyn Acosta as they explore the practical "hows" of manifestation and energy healing. Rosalyn talks about how to bridge your spiritual vision into physical form using ethical plant medicine, quantum physics, and "spiritual efficiency" for your daily life.

In This Episode, You Will Learn:
• Discover the biological science behind smudging with Sage and Palo Santo, including how it affects cortisol and airborne bacteria.
• Master the difference between the Horizontal and Vertical realms to better balance your daily life with your energy body.
• Explore the Seer, Feeler, and Knower archetypes to identify and manage your unique intuitive gifts and boundaries.
• Release collective heaviness by utilizing "spiritual efficiency"—energy hacks designed for busy parents and professionals.
• Navigate the "Great Shift" of 2026 by anchoring your authentic light and supporting the next generation of "homoluminous" youth.

Connect with Rosalyn:
• Book a distance or in-person session: www.livehealtravel.com/heal/
• Upcoming Teen Reiki 1 Training & Rite of Passage: https://livehealtravel.com/teen-training-and-rite-of-passage/

Connect with Colleen & Robyn:
• Website: ReikiLifestyle.com
• Online Classes: Register for upcoming Reiki Training
• YouTube: Watch our Video Discussions & Journeys
• Instagram: @reikilifestyleofficial

Join Our Community:
• Free Online Distance Reiki Share: Join us every Tuesday from 9:30 am – 11:00 am Pacific Time for a global healing circle.
• For questions: colleen@reikilifestyle.com
• Free Consultation Call with Danni

Love the Show?
• If this episode helped you on your journey, please subscribe and leave a 5-star review on Apple Podcasts or Spotify.
• Your support helps us share the gift of Reiki with more people around the world!

**DISCLAIMER** This episode is not a substitute for seeking professional medical care but is offered for relaxation and stress reduction, which support the body's natural healing capabilities. Reiki is a complement to and never a replacement for professional medical care. Colleen and Robyn are not licensed professional health care providers and urge you to always seek out the appropriate physical and mental help professional health care providers may offer. Results vary by individual.
It is a privilege to welcome actor Kevin O'Sullivan to The Jake's Take with Jacob Elyachar Podcast. Born in Los Angeles and raised in Potomac, Maryland, he splits his time between New York City and LA. Kevin has enjoyed a long and lucrative career in the entertainment industry. His first major role was on the iconic Beverly Hills, 90210, where he shared the screen with Jennie Garth, Jason Priestley, Ian Ziering, and Brian Austin Green. He also guest-starred on NBC/Peacock's Days of Our Lives and starred in Cop Land alongside iconic actors including Sylvester Stallone, Robert De Niro, Harvey Keitel, Janeane Garofalo, Annabella Sciorra, and the late Ray Liotta. He was awarded Best Supporting Actor at the 2024 Egyptian American Film Festival in New York City for his portrayal of Officer John Nelson in The Deal. He also starred in Tai, Lord Hear Our Prayer, Unveiling Shadow, and the 2024 Tribeca Film Festival-nominated film Nepotism, Baby! Throughout this decade, Kevin has starred in various vertical series, most recently In Love with My Mom's Boyfriend with Robert Palmer Watkins and The Legal Queen with Ben Schreen. On this edition of The Jake's Take with Jacob Elyachar Podcast, Kevin O'Sullivan spoke about Cop Land's upcoming 30th anniversary, working on numerous vertical series, and previewed Choleric. Become a supporter of this podcast: https://www.spreaker.com/podcast/jake-s-take-with-jacob-elyachar--4112003/support.
La Lista Podcast host Rubén Mendive and writer Thulio DaSilva ditch the formal interviews and hop on the mic for quick, unfiltered conversations about their chaotic creative lives — covering new writing projects, hot takes on industry trends, dating disasters, and whatever discourse the algorithm served them that morning. This week: Thulio's Sniffies hookup story (including a brutal gym neg), writing plays at the LA LGBT Center, Selena Gomez foot-kissing discourse, a long-awaited job update (!!), and Rubén's unhinged thesis that vertical micro-dramas are the next streaming revolution. Instagram - @lalistapodcast Music: Sunny Side - Airstream
Join Kyle, Nader, Vibhu, and swyx live at NVIDIA GTC next week!

Now that AIE Europe tix are ~sold out, our attention turns to Miami and World's Fair!

The definitive AI accelerator chip company has more than 10xed this AI Summer, and is now a $4.4 trillion megacorp… that is somehow still moving like a startup. We are blessed to have a unique relationship with our first ever NVIDIA guests: Kyle Kranen, who gave a great inference keynote at the first World's Fair and is one of the leading architects of NVIDIA Dynamo (a datacenter-scale inference framework supporting SGLang, TRT-LLM, and vLLM), and Nader Khalil, a friend of swyx from our days in Celo in The Arena, who has been drawing developers at GTC since before they were even a glimmer in the eye of NVIDIA.

Nader discusses how NVIDIA Brev has drastically reduced the barriers to entry for developers to get a top-of-the-line GPU up and running, and Kyle explains NVIDIA Dynamo as a datacenter-scale inference engine that optimizes serving by scaling out, leveraging techniques like prefill/decode disaggregation, scheduling, and Kubernetes-based orchestration, framed around cost, latency, and quality tradeoffs. We also dive into Jensen's “SOL” (Speed of Light) first-principles urgency concept, long-context limits and model/hardware co-design, internal model APIs (https://build.nvidia.com), and upcoming Dynamo and agent sessions at GTC.

Full video pod on YouTube

Timestamps
00:00 Agent Security Basics
00:39 Podcast Welcome and Guests
07:19 Acquisition and DevEx Shift
13:48 SOL Culture and Dynamo Setup
27:38 Why Scale Out Wins
29:02 Scale Up Limits Explained
30:24 From Laptop to Multi Node
33:07 Cost Quality Latency Tradeoffs
38:42 Disaggregation Prefill vs Decode
41:05 Kubernetes Scaling with Grove
43:20 Context Length and Co Design
57:34 Security Meets Agents
58:01 Agent Permissions Model
59:10 Build Nvidia Inference Gateway
01:01:52 Hackathons And Autonomy Dreams
01:10:26 Local GPUs And Scaling Inference
01:15:31 Long Running Agents And SF Reflections

Transcript

Agent Security Basics

Nader: Agents can do three things. They can access your files, they can access the internet, and now they can write custom code and execute it. You really only let an agent do two of those three things. If it can access your files and write custom code, you don't want internet access, because that's a full vulnerability, right? If it has access to the internet and your file system, you should know the full scope of what that agent is capable of doing. Otherwise, it can get injected or something can happen. And so that's a lot of what we've been thinking about: how do we both enable this, because it's clearly the future, but then also, what are the enforcement points that we can start to protect?

swyx: All right.
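To make Nader's "two of three" rule concrete, here is a minimal sketch, our illustration rather than NVIDIA's or Brev's actual enforcement code, of a capability gate that refuses any agent configuration granting files, internet, and code execution all at once:

```python
from enum import Enum, auto


class Capability(Enum):
    """The three agent capabilities Nader enumerates."""
    FILES = auto()      # read/write the local file system
    NETWORK = auto()    # reach the internet
    CODE_EXEC = auto()  # write and execute arbitrary code


def validate_capabilities(granted: set[Capability]) -> None:
    """Enforce the 'at most two of three' rule.

    Files + code execution + internet together means a prompt-injected
    agent could exfiltrate data, so that combination is refused.
    """
    if {Capability.FILES, Capability.NETWORK, Capability.CODE_EXEC} <= granted:
        raise PermissionError(
            "Agent may hold at most two of: files, network, code execution"
        )


# A coding agent with file access and code execution, but no internet: OK.
validate_capabilities({Capability.FILES, Capability.CODE_EXEC})

# All three at once is rejected.
try:
    validate_capabilities({Capability.FILES, Capability.NETWORK,
                           Capability.CODE_EXEC})
except PermissionError as e:
    print(e)
```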
Podcast Welcome and Guests

swyx: Welcome to the Latent Space podcast in the Chroma studio. Welcome to all the guests here. We are back with our guest host Vibhu. Good to have you back. And our friends Nader and Kyle from NVIDIA. Welcome.

Kyle: Yeah, thanks for having us.

swyx: Actually, I don't even know your titles. I know you're like architect something of Dynamo.

Kyle: Yeah, I'm one of the engineering leaders [00:01:00] and an architect of Dynamo.

swyx: And you're director of something, developer tech. You're the developers, developers, developers guy at NVIDIA.

Nader: Open source, agent marketing, Brev, devrel tools and stuff. That's been the focus.

swyx: And we're recording this ahead of NVIDIA GTC, which is coming to town again, or taking over town, which we'll all be at. We'll talk a little bit about your sessions and stuff.

Nader: We're super excited for it.

GTC Booth Stunt Stories

swyx: One of my favorite memories for Nader: you always do marketing stunts, and while you were at Brev, you had this surfboard that you went down to GTC with, and NVIDIA apparently liked it so much that they bought you. What was that like?

Nader: Yeah. Our logo was a shaka. We were always just trying to keep true to who we were. With some startups, you're trying to pretend that you're a bigger, more mature company than you are. And it was actually Evan Conrad from SF Compute who was just like, you guys are...

swyx: Previous guest. Yeah.

Nader: Amazing. Oh, really? Amazing. Yeah. He was just like, guys, you're two dudes in a room. Why are you [00:02:00] pretending that you're not? And so then we were like, okay, let's make the logo a shaka. We brought surfboards to our booth at GTC and the energy was great. Some palm trees too.

Kyle: They actually poked out over the walls, so you could see the Brev booth from very far away.

Nader: Oh, so you remember it back then?

Kyle: Yeah, I remember it pre-acquisition. I was like, oh, those guys look cool.

Nader: That makes sense. 'Cause we signed up really last minute, so we had the last booth, all the way in the corner. And I was worried that no one was gonna come. That's why we had the palm trees, we really came in with the surfboards. We even had one of our investors bring her dog, and she was just walking the dog around to try to bring energy towards our booth.

swyx: Steph.

Kyle: Yeah, she's the best.

swyx: You know, as a conference organizer, I love that. Everyone who sponsors a conference comes, does their booth, and they're like, we are changing the future of AI, or some generic b******t. No, actually try to stand out, make it fun, right? And people still remember it after three years.

Nader: Yeah. You know what's so funny? I'll give you this clip if you want to add it [00:03:00] in, but my wife, at the time my fiancée, was in medical school and she came to help us, 'cause it was a big moment for us. And so we bought this Cricut, it's like a vinyl printer, 'cause how else are we gonna label the surfboard? So we got a surfboard, luckily was able to purchase that on the company card. We got a Cricut, and it was "fine tuning for enterprises" or something like that that we put on the surfboard, and it's 1:00 AM the day before we go to GTC, she's helping me put these vinyl stickers on. And she goes, she's like, if you pull this off, you son of a b***h.
Pretty much right after the acquisition, I stitched that clip together with music and sent it to our family group chat.

swyx: Yeah. No, well, she made a good choice there. Was that basically the origin story for Launchables?

Nader: It was. And maybe we should explain what Brev is. Brev is just a developer tool that makes it really easy to get a GPU. We connect a bunch of different GPU sources. The basics of it is: how quickly can we SSH you into a GPU? Whenever we would talk to users, they wanted a GPU. They wanted an A100. And if you go to any cloud [00:04:00] provisioning page, usually it's three pages of forms, or somewhere in the forms there's a dropdown, and in the dropdown there's some weird code that you know to translate to an A100. And I remember just thinking: every time someone says they want an A100, the piece of text that they're telling me they want is stuffed away in the corner. And so we were like, what if the biggest piece of text was what the user's asking for? So when you go to Brev, it's just big GPU chips with the type that you want.

swyx: With beautiful animations that you worked on. Now you can just prompt it, but back in the day, those were handcrafted, artisanal code.

Nader: Yeah, I was actually really proud of that, because I made it in Figma, and then I was really struggling to figure out how to turn it from Figma to React. So what it actually is is just an SVG, and I have all the styles, and when you change the chip, whether it's active or not, it changes the SVG code, and that somehow renders like it's animating. We just had the transition slow, but it's just a JavaScript function to change the underlying SVG. That was how I ended up figuring out how to move it over from Figma. But yeah, that's artisanal. [00:05:00]

Kyle: Speaking of marketing stunts though, he actually used those SVGs to make these cards.

Nader: Oh yeah.

Kyle: A GPU gift card that he handed out everywhere. That was actually my first impression of that one.

swyx: I think I still have one of them.

Nader: They look great. I have a ton of them still in our garage, actually; they just don't have labels. We should honestly bring them back. But I found this old printing press here, actually just around the corner on Van Ness. It's a third-generation San Francisco shop. And so I come in, an excited startup founder, and they just have this crazy old machinery and I'm in awe, 'cause the whole building is so physical. You're seeing these machines, they have pedals to move these saws and whatever. I don't know what this machinery is, but I saw all three generations: there's the grandpa, the father, and the son, and the son was around my age.

swyx: It's like a holy trinity.

Nader: It's funny, because I just took the same SVG and we printed it. It's foil printing, so they make a mold that's an inverse of the A100, and then they put the foil on it [00:06:00] and press it into the paper. And I remember once we got them, he was like, hey, don't forget about us.
You know, I guess early Apple and Cisco's first business cards were all made there. And so he was like, yeah, we get the startup businesses, but then as they mature, they kind of go somewhere else. And I think we were talking with marketing about using them for something; we should go back and make some cards.

swyx: Yeah. You know, I remember, as a very, very small Brev investor, I was like, why are we spending time doing these stunts for GPUs? As a typical cloud hardware person, you go into AWS, you pick like a T5 xl, whatever, from a list, and you look at the specs. Why animate this GPU? And I do think it just shows the level of care that goes throughout Brev. And now NVIDIA.

Nader: I think that's the thing that struck me most when we first came in: the amount of passion that everyone has. You talk to Kyle, you talk to... every VP that I've met at NVIDIA goes so close to the metal. I remember, almost a year ago, my VP asked me, hey, [00:07:00] what's Cursor? And are you using it? And if so, why? And he downloaded Cursor and was asking me to help him use it, or just show him why we were using it. The amount of care that everyone has, and the passion and appreciation for the moment. This is a very unique time, so it's really cool to see everyone really appreciate that.

swyx: Yeah.

Acquisition and DevEx Shift

swyx: One thing I wanted to do before we move over to research topics, and the stuff that Kyle's working on, is just tell the story of the acquisition. Not many people have been through an acquisition with NVIDIA. What's it like? Anything you'd like to say.

Nader: It's a crazy experience. The thing that was the most exciting for us: our goal was just to make it easier for developers. We wanted to find access to GPUs, make it easier to do that. Oh, actually, your question about Launchables: a Launchable is just a one-click deploy for any software on top of the GPU. And so what we really liked about NVIDIA was that it felt like we just got a lot more resources to do all of that. [00:08:00] NVIDIA's goal is to make things as easy for developers as possible, so there was a really nice synergy there. When it comes to an acquisition, I think the amount that the souls of the products align is going to speak to the success of the acquisition. And so in many ways it feels like we're home. This is a really great outcome for us. I love brev.nvidia.com. You should use it.

Kyle: It's the front page for GPUs.

Nader: Yeah. If you want GPUs, you go there.

swyx: And internally it's growing very quickly. I don't remember, you said some stats there.

Nader: Yeah, I wish I had the exact numbers, but internally and externally, it's been growing really quickly.
We've been working with a bunch of partners, a bunch of different customers and ISVs. If you have a solution that runs on the GPU and you want people to use it quickly, we can bundle it up in a Launchable and make it a one-click run. And if you're doing things and you want just a sandbox or something to run on, right, like OpenClaw: huge moment, super exciting. Internally, people wanna run this, and we know we have to be really careful about the security implications. Do we let this run on the corporate network? Security's guidance was: hey, [00:09:00] run this on Brev. It's a VM, it's sitting in the cloud, it's off the corporate network, it's isolated. And so that's been our stance, internally and externally, about how to even run something like OpenClaw while we figure out how to run these things securely.

swyx: I think you were almost the right team at the right time, when NVIDIA is starting to invest a lot more in developer experience, or whatever you call it. UX, or, I don't know, software. Obviously NVIDIA has always invested in software, but this is a different audience.

Kyle: A wider developer base.

swyx: Yeah. So what is it called internally? What is this that people should be aware is going on there?

Nader: What, like developer experience? NVIDIA always wants to make a good developer experience. The thing is, a lot of the technology is just really complicated. The thing that's been really growing, or why AI is having a huge moment, is not [00:10:00] because, say, data scientists in 2018 were quiet then and are much louder now. The pie is growing, right? There's a whole bunch of new audiences. My mom's wondering what she's doing. My sister taught herself how to code. I actually think AI is generally a big equalizer, and you're seeing a more technologically literate society. Everyone's learning how to code; there isn't really an excuse not to anymore. And building a good UX means that you really understand who your end user is. And when your end user becomes such a wide variety of people, you have to almost reinvent the practice, right?

Kyle: You have to actually build more developer UX, right? Because there are tiers of developer base that were added. The hackers that are building on top of OpenClaw, for example, have never used a GPU. They don't know what CUDA is. They just want to run something. You need new UX that is not just: hey, how do you program something in CUDA and run it? When deep learning was getting big, we built Torch. But recently, the amount of [00:11:00] layers that are added to that developer stack has just exploded, because AI has become ubiquitous. Everyone's using it in different ways.

Nader: It's moving fast in every direction. Vertical, horizontal.

Vibhu: Yeah.
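Nader's guidance above, run untrusted agents in an isolated VM off the corporate network, can be approximated locally. Here is a minimal, hypothetical sketch that shells out to Docker to run agent-generated code with no network access; the base image and script path are placeholders, not anything Brev or NVIDIA ships:

```python
import subprocess
from pathlib import Path


def run_untrusted(script: Path, workdir: Path) -> subprocess.CompletedProcess:
    """Run agent-generated code in a container with no network.

    --network none cuts off the internet, so the agent keeps file access
    and code execution but loses the third capability (the two-of-three
    rule from the cold open). A cloud VM off the corporate network, as
    Nader describes, is the stronger version of the same idea.
    """
    return subprocess.run(
        [
            "docker", "run", "--rm",
            "--network", "none",            # no internet access
            "-v", f"{workdir}:/workspace",  # only this directory is visible
            "-w", "/workspace",
            "python:3.12-slim",             # placeholder base image
            "python", script.name,
        ],
        capture_output=True, text=True, check=False,
    )


# Usage: result = run_untrusted(Path("agent_task.py"), Path("./sandbox"))
```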
Vibhu: You even take it down to hardware, like the DGX Spark. It's basically the same system as just throwing it up on a big GPU cluster.

Nader: Yeah, it's amazing. Blackwell.

swyx: We saw the preview at last year's GTC, and that was one of the better performing videos and NVIDIA coverage so far. This will beat it. Fingers crossed.

DGX Spark and Remote Access

Nader: Even when Grace Blackwell, or when DGX Spark, was first coming out, I got to be involved in the developer experience from the beginning.

swyx: You were involved.

Nader: Yeah. I got an email, we just got thrown into the loop. It was actually really funny, 'cause I'm still pretty fresh from the acquisition and I'm getting an email from a bunch of the engineering VPs about the new hardware, not a chip, but a GPU system that we're putting out. And I'm like, okay, cool, Nader's now involved with this for the UX. What am I gonna do [00:12:00] here? I remember the first meeting, I was just kind of quiet as I was hearing engineering VPs talk about what this box could be, what it could do, how we should use it. One of the first ideas, I think a quote was: the first thing someone's gonna wanna do with this is get two of them and run a Kubernetes cluster on top of them. And I was like, oh, I think I know why I'm here. I was like, the first thing we're doing is easy SSH into the machine. And just kind of scoping it down: once you can do that, the person who wants to run a Kubernetes cluster on two Sparks has a higher propensity for pain than someone who buys it and wants to run OpenClaw right now. If you can make sure that that's as effortless as possible, the rest becomes easy. So there's a tool called NVIDIA Sync; it just makes the SSH connection really simple. If you have a Mac or a PC or whatever, if you have a laptop and you buy this GPU and you want to use it, you should be able to use it like it's a GPU in the cloud, right? [00:13:00] But there's all this friction around how you actually get into that. That's part of Brev's value proposition: there's a CLI that wraps SSH and makes it simple. Our goal is just to get you into that machine really easily. And one thing we just launched at CES, it's still in early access, we're ironing out some kinks, but it should be ready by GTC: you can register your Spark on Brev.

swyx: Like remote-managed local hardware. Single pane of glass. Because Brev can already manage other clouds anyway, right?

Vibhu: Yeah. And you use the Spark on Brev as well, right?

Nader: Yeah, exactly. You set it up at home, you run the command on it, and then it'll essentially appear in your Brev account. Then you can take your laptop to a Starbucks or a cafe, and you can continue to use your Spark just like any other cloud node on Brev.

swyx: And it's just like a pre-provisioned data center in your home.

Nader: Yeah, exactly.
Vibhu: Tiny little data center.

Nader: Tiny little, the size of your phone.

SOL Culture and Dynamo Setup

swyx: One more thing before we move on to Kyle. I just have so many Jensen stories and I love mining Jensen stories. My favorite so far is SOL. What is [00:14:00] SOL?

Nader: SOL is actually, of all the lessons I've learned, definitely my favorite.

Kyle: It'll always stick with you.

Nader: Yeah. In your startup, everything's existential, right? We've run out of money, we were at risk of missing payroll, we've had to contract our team because we ran outta money. Because of that, you're always forcing yourself to understand the root cause of everything. If you get a date, if you get a timeline, you know exactly why that date or timeline is there. You're pushing every boundary, and you're not just accepting a no, just because. As you start to introduce more layers, as you start to become a much larger organization, SOL is essentially: what is the physics? The speed of light moves at a certain speed. So if light's moving somewhat slower, you know something's in the way. So before trying to layer reality back in, of why can't this be delivered at some date, let's just understand the physics. What is the theoretical limit to how fast this can go? And then start to tell me why. 'Cause otherwise people will start telling you why something can't be done. But actually, I think any great leader's goal is just to create urgency. [00:15:00]

Kyle: Create compelling events, right? SOL is a term NVIDIA uses to instigate a compelling event. You say: this is done, how do we get there? What is the minimum, as much as necessary, as little as possible, thing that it takes for us to get exactly here? It helps you just break through a bunch of noise, instantly.

swyx: One thing I'm unclear about is, can only Jensen use the SOL card? Obviously it's Jensen, but can someone else be like, no, get the b******t out?

Kyle: No, frontline engineers use it.

Nader: Yeah, everyone. I think it's not so much about get the b******t out. It's: give me the root understanding, right? If you tell me something takes three weeks, well, what are the first principles? Why is it three weeks? What's the actual limit of why this is gonna take three weeks? Let's say you wanted to buy a new computer and someone told you it's gonna be here in five days. What's the SOL? Well, the SOL is: I could walk into a Best Buy and pick it up for you, right? So anything beyond that is... and is that practical? Is that how we're gonna, say, give everyone in the [00:16:00] company a laptop? Obviously not. So then that's the SOL, and then it's like, okay, well, if we have to get more than 10, suddenly there might be some lead time, right? And so now we can kind of piece the reality back in.

swyx: So this is the Paul Graham "do things that don't scale", and this is also what people would now call high agency.
Kyle: Yeah. It's actually really interesting, because there's a second hardware angle to SOL that doesn't come up for all the orgs. SOL is used culturally at NVIDIA for everything.

swyx: I imagine that can be annoying sometimes, like someone keeps SOL'ing you and you're like, guys, we have to be stable, we have to f*****g plan.

Kyle: It's an interesting balance.

Nader: Yeah. I encounter that, actually, with Alec, right? 'Cause we have a new conference, and we have goals of what we wanna launch by the conference.

swyx: Where is this, GTC?

Nader: Well, we did it for CES, we did it for GTC DC before that, and we're doing it for GTC San Jose. Every time we have a new moment [00:17:00] and we want to launch something, and we want to do so at SOL, that does mean some level of prioritization needs to happen. And so it is difficult. You have to be careful with what you're pushing. Stability is important, and that should be factored into SOL. SOL isn't just build everything and let it break; that's part of the conversation. So as you're layering in all the details, one of them might be: hey, we could build this, but then it's not gonna be stable for x, y, z reasons. One of our conversations for CES was: we can get registering your Spark with Brev into early access, but there are a lot of things we need to do in order to feel really comfortable from a security perspective; there's a lot of networking involved before we deliver that to users. So it's like, okay, let's get this to a point where we can at least let people experiment with it. We had it in a booth, we had it in Jensen's keynote, and then let's go iron out all the networking kinks. That's not easy, and so that can come later. That was the way that we layered that back in. [00:18:00]

Kyle: It's not really about saying you don't have to do the maintenance or operational work. It's more that it highlights how progress is incremental, right? What is the minimum thing that we can get to? And then there's SOL for every component after that. But there's the SOL to get you to the starting line; that's usually how it's asked. On the other side, SOL came out of hardware at NVIDIA. SOL is literally: if we ran the accelerator or the GPU at basically full speed, with no other constraints, how fast would we be able to make a program go?

swyx: So in training, you then work back to some percentage of, like, MFU, for example.

Kyle: Yeah, that's a great example. There's an SOL MFU, and then there's what's practically achievable.
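A rough sketch of that speed-of-light framing applied to training throughput. The numbers are illustrative: the ~989 TFLOP/s figure is the commonly cited H100 BF16 dense peak (an assumption here, not a quoted spec), and the step stats are made up:

```python
def train_step_flops(n_params: float, tokens_per_step: float) -> float:
    """Approximate FLOPs for one training step.

    Uses the standard ~6 * parameters * tokens rule of thumb
    (forward + backward pass) for dense transformers.
    """
    return 6.0 * n_params * tokens_per_step


# Speed of light: assumed peak of one H100, BF16 dense (no sparsity).
PEAK_FLOPS_PER_SEC = 989e12

# Hypothetical measurement: 7B model, ~1M tokens/step, 8 GPUs, 14 s per step.
achieved = train_step_flops(7e9, 1_048_576) / 14.0  # FLOP/s actually delivered
sol = 8 * PEAK_FLOPS_PER_SEC                        # FLOP/s at speed of light
mfu = achieved / sol

print(f"MFU = {mfu:.1%}")  # far below the usual ~40-50% says something is in the way
```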
swyx: Cool. Should we move on to Kyle's side? Kyle, you're coming more from the data science world. Whenever I meet someone who's worked in tabular stuff, graph neural networks, time series... when I go to NeurIPS or ICML, I walk the back halls. There's always a small group of graph people, a small group of tabular people, [00:19:00] and, like, there's no one else there. You know what I mean? It's important, interesting work if you care about solving the problems that they solve.

Kyle: Yeah.

swyx: But everyone else is just LLMs all the time.

Kyle: Yeah. It's like the black hole, right? Has the event horizon reached this yet at NeurIPS? I took a different path to NVIDIA. I joined six years ago, seven if you count when I was an intern. So I joined NVIDIA right out of college, and the first thing I jumped into was not what I'd done during my internship, which was stuff for autonomous vehicles, heavyweight object detection. I jumped into recommenders; that was popular.

swyx: Yeah, he did RecSys as well.

Kyle: Yeah, RecSys. That was the tabular data of the time, right? You have tables of audience qualities and item qualities, and you're trying to figure out which member of [00:20:00] the audience matches which item, or, more practically, which item matches which member of the audience. And at the time, really, we were trying to turn recommenders, which had historically been a bit of a CPU-based workflow, into something that ran really well on GPUs. And it's since been done: there are a bunch of recsys libraries that run on GPUs. The common models, like the Deep Learning Recommendation Model, which came out of Meta, and the Wide and Deep model, which was released by Google, were very accelerated by GPUs, using the fast HBM on the chips especially to do vector lookups. It was very interesting at the time and super relevant, because we were starting to get this explosion of feeds and things that required recommenders to just actively be on all the time. And I transitioned a little bit towards graph neural networks when I discovered them, because you can actually use graph neural networks to represent relationships between people, items, concepts, and that interested me. So I jumped into that at [00:21:00] NVIDIA and got really involved for two-ish years.

swyx: Something I learned from Bryan Catanzaro is that you can just kind of choose your own path at NVIDIA.

Kyle: Oh my God, yeah.

swyx: Which is not a normal big-corp thing. Usually you have a lane, you stay in your lane.

Nader: That's probably the reason why I enjoy being in a big company, coming from a startup: the mission is the boss. It feels like a big game of pickup basketball. If you wanna play basketball, you just go up to the court and you're like, hey look, we're gonna play this game and we need three. And you just find your three. Honestly, for every new initiative, that's what it feels like.

Vibhu: It also shows, right? NVIDIA is just releasing state-of-the-art stuff in every domain.
Like, okay, you expect foundation models with Nemotron; voice, just randomly Parakeet comes out, then another one.

Kyle: The NVIDIA voice team has always been producing.

Vibhu: Yeah. In every other domain there's a paper that comes out, a dataset that comes out. And it also stems back to what NVIDIA has to do, right? You have to make chips years before they're actually produced, so you need to really focus.

Kyle: The design process starts like three to five years before the chip gets to market.

Vibhu: Yeah. I'm curious more about what that's like. [00:22:00] So you have specialist teams. Is it just, people find an interest, you go in, you go deep on whatever, and that kind of feeds back into predictions? The internals at NVIDIA must be crazy, right? Even without selling to people, you have your own predictions of where things are going, and they're very grounded, right?

Kyle: Yeah, it's really interesting. There are two things that I think NVIDIA does which are quite interesting. One is, we really index on passion. There's a big organizational top-down push to ensure that people are working on the things that they're passionate about. So if someone proposes something that's interesting, many times they can just email someone way up the chain who would find it relevant and say, hey, can I go work on this?

Nader: I worked at a big company for a couple years before starting on my startup journey, and it felt very weird if you were to email out of chain, if that makes [00:23:00] sense. The emails at NVIDIA are like mosh pits.

swyx: Shoot.

Nader: It's just 60 people, whatever. It's insane. They just help. And this is a weird thing, where I used to be like, why would we send emails? We have Slack. Now I'm the exact opposite. I feel so bad for anyone who's messaging me on Slack, 'cause I'm so unresponsive. I'm email-maxing now. Email is great, right? Because important threads get bumped back up, and Slack doesn't do that. I just have this casino going off on the right or on the left, and I don't know which thread was from where or what. And then with the subject line, you can have working threads. I think what's difficult is, when you're small, when you're not 40,000 people, Slack will work fine. But I don't know what the inflection point is; there's gonna be a point where that becomes really messy, and you'll actually prefer having email, 'cause you can have working threads. You can cc more than nine people in a thread.

Kyle: You can fork stuff.

Nader: You can [00:24:00] fork stuff, which is super nice. And that is part of where you can propose a plan. You can also just start. Honestly, momentum's the only authority, right?
So if you can just start to make a little bit of progress and show someone something, then they can try it. That's, I think, the most effective way to push anything forward, both at NVIDIA and just generally.

Kyle: There's the other concept that's explored a lot at NVIDIA, which is this idea of a zero-billion-dollar business. Market creation is a big thing at NVIDIA.

swyx: Oh, you want to go and start a zero-billion-dollar business?

Kyle: Jensen says: we are completely happy investing in zero-billion-dollar markets. We don't care if this creates revenue; it's important for us to know about this market, we think it will be important in the future. It can be zero billion dollars for a while. I'm probably mangling his words here, but I'll give an example: NVIDIA's been working on autonomous driving for a long time.

swyx: Like an NVIDIA car.

Kyle: No, they've...

Vibhu: They use the Mercedes, right? They're around the HQ, and I think it finally just got licensed out. Now they're starting to be used quite a [00:25:00] bit. For 10 years you've been seeing Mercedes with NVIDIA logos driving around.

Kyle: If you're in South Santa Clara, yeah. So, zero-billion-dollar markets are a thing.

swyx: I mean, okay, look, cars are not a zero-billion-dollar market. That's a bad example.

Nader: I think he's messaging zero today. Or even internally, right? An org doesn't have to ruthlessly find revenue very quickly to justify its existence. A lot of the important research, a lot of the important technology being developed...

Kyle: That's kind of where research comes in. Research is very ideologically free at NVIDIA; they can pursue things.

swyx: Were you in research, officially?

Kyle: I was never in research officially. I was always in engineering. I'm in an org called Deep Learning Algorithms, which is basically: how do we make things that are relevant to deep learning go fast?

swyx: That sounds freaking cool.

Vibhu: And I think a lot of that is underappreciated, right? Like time series: this week Google put out the TimesFM paper, [00:26:00] a new time series model. Semantic IDs started applying transformers and LLMs to recsys. And when you think of the scale of companies deploying these, Amazon recommendations, Google web search, it's huge scale.

Kyle: Yeah.

Vibhu: And you want it fast.

Kyle: Yeah. Actually, there's a fun moment that brought me full circle. Amazon Ads recently gave a talk where they talked about using Dynamo for generative recommendation, which was weirdly cathartic for me. I'm like, oh my God, I've supplanted what I was working on. You're using LLMs now to do what I was doing five years ago.

swyx: Amazing. And let's go right into Dynamo. Maybe introduce it top down.

Kyle: Sure. I think at this point a lot of people are familiar with the term inference. Funnily enough, I went from inference being a really niche topic to something that's discussed on normal people's Twitter feeds.

Nader: It's on billboards here now.

Kyle: Yeah. Very, very strange.
Driving and seeing an inference ad on the 101. [00:27:00] Inference at scale is becoming a lot more important. We have these moments, like OpenClaw, where you have these agents that take lots and lots of tokens but produce incredible results. There are many different aspects of test-time scaling, where you can use more inference to generate a better result than if you were to use a short amount of inference. There's reasoning, there's querying, there's adding agency to the model, allowing it to call tools and use skills. Dynamo sort of came about at NVIDIA because myself and a couple of others were talking about these concepts: you have inference engines like vLLM, SGLang, TensorRT-LLM, and they sort of think about things as one single copy, one replica, right?

Why Scale Out Wins

Kyle: Like one version of the model. But when you're actually serving things at scale, you can't just scale up that replica, because you end up with performance problems; there's a scaling limit to scaling up replicas. So you actually have to scale out, to use some Kubernetes-type terminology. We realized that there was a lot of potential optimization we could do in scaling out and building systems for datacenter-scale [00:28:00] inference. So Dynamo is this datacenter-scale inference engine that sits on top of the frameworks, like vLLM, SGLang and TensorRT-LLM, and just makes things go faster, because you can leverage the economy of scale: the fact that you have KV cache, which we can define a little later, in all these machines, and you wanna figure out ways to maximize your cache hits. Or you want to employ new techniques in inference, like disaggregation, which Dynamo introduced to the world in March. Not introduced, there were academic papers beforehand, but we were one of the first frameworks to start supporting it. And we wanna combine all these techniques into a modular framework that allows you to accelerate your inference at scale.

Nader: By the way, Kyle and I became friends on my first day at NVIDIA, and I always loved it, 'cause he always teaches me new things.

swyx: This is why I wanted to put the two of you together. I was like, this is gonna be good.

Kyle: We've talked to each other a bunch. [00:29:00] Actually, you asked: why can't we scale up?

Scale Up Limits Explained

Nader: Yeah, you said model replicas.

Kyle: So, scale up means assigning more...

swyx: Heavier?

Kyle: Yeah, making things heavier. Adding more GPUs, adding more CPUs. Scale out is having a barrier and saying: I'm gonna duplicate my representation of the model, or a representation of this microservice, and I'm gonna replicate it many times to handle load. And the reason that you can't scale up past some point is that there are hardware bounds and algorithmic bounds on that type of scaling.
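To ground the scale-up/scale-out distinction before Kyle's hardware example: a toy scale-out layer (our sketch, not Dynamo code) just replicates an inference endpoint and spreads requests across the replicas. The endpoint URLs and payload shape here are hypothetical:

```python
import itertools

import requests  # third-party HTTP client

# Hypothetical replicas: each URL fronts one copy of the model, itself
# scaled *up* to however many GPUs a single replica can use efficiently.
REPLICAS = [
    "http://inference-0:8000/v1/completions",
    "http://inference-1:8000/v1/completions",
    "http://inference-2:8000/v1/completions",
]
_next_replica = itertools.cycle(REPLICAS)  # naive round-robin


def complete(prompt: str) -> str:
    """Send one request to the next replica in line.

    Real routers (Dynamo included) are smarter than round-robin,
    e.g. routing to the replica whose KV cache already holds your prefix.
    """
    url = next(_next_replica)
    resp = requests.post(url, json={"prompt": prompt, "max_tokens": 128})
    resp.raise_for_status()
    return resp.json()["text"]
```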
I'll give you a good example that's very trivial. Let's say you're on an H100. The maximum NVLink domain for H100, for most DGX H100s, is eight GPUs, right? So if you scaled up past that, you're gonna have to figure out ways to handle the fact that now, for the GPUs to communicate, you have to do it over InfiniBand, which is still very fast, but not as fast as NVLink.

swyx: Is it like one order of magnitude? Like hundreds?

Kyle: It's about an order of magnitude, yeah.

swyx: So not terrible.

Kyle: [00:30:00] I need to remember the data sheet here; I think it's about 500 gigabytes a second unidirectional for NVLink, and about 50 gigabytes a second unidirectional for InfiniBand. It depends on the generation.

swyx: I just wanna set this up for people who are not familiar with these kinds of layers and transfer speeds.

Vibhu: And all that, of course.
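Back-of-envelope on why that order of magnitude matters, using the rough figures Kyle quotes from memory (treat both as illustrative, since the real numbers vary by generation), with a hypothetical activation size:

```python
# Time to move one tensor-parallel activation exchange across a link.
NVLINK_BPS = 500e9      # ~500 GB/s unidirectional, as quoted above
INFINIBAND_BPS = 50e9   # ~50 GB/s unidirectional, as quoted above

# Hypothetical payload: fp16 activations, batch*seq = 8192, hidden = 4096.
payload_bytes = 2 * 8192 * 4096

for name, bw in [("NVLink", NVLINK_BPS), ("InfiniBand", INFINIBAND_BPS)]:
    print(f"{name}: {payload_bytes / bw * 1e6:.0f} microseconds per exchange")

# NVLink:     ~134 microseconds per exchange
# InfiniBand: ~1342 microseconds per exchange
# Tensor parallelism does such exchanges every layer, so a 10x slower link
# can erase the gain from adding GPUs -- hence scaling out past the NVLink domain.
```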
From Laptop to Multi Node

Vibhu: Also, maybe even going a few steps back before that: most people are very familiar with what you can run on your laptop, with the local LLM engines; you can just run inference there.

Kyle: You can run it on that laptop.

Vibhu: Then models got pretty big, right? GLM-5, they doubled the size. So what do you do when you have to go from, okay, I can get 128 gigs of memory, I can run it on a Spark, to multi-GPU? Okay, multi-GPU, there's some support there. Now, if I'm a company and I'm not hiring the best researchers for this, but I need to go [00:31:00] multi-node, I have a lot of servers. Now there are efficiency problems, right? You can have multiple 8x H100 nodes, but how do you do that efficiently?

Kyle: Yeah, how do you represent them? How do you choose how to represent the model? That's a hard question. Everyone asks, how do you size it? Oh, I wanna run GLM-5, which just came out. There have been like four new models in the past week, by the way.

swyx: You know why, right? DeepSeek.

Kyle: No comment. But GLM-5, right, we have this new model, it's a large size, and you have to figure out how to both scale up and scale out, because you have to find the right representation that you care about. Everyone does this differently. Let's be very clear: everyone figures this out on their own path.

Nader: I feel like a lot of AI, or ML even, is like this. There was some tweet a few months ago that was like, why hasn't fine-tuning as a service taken off? It might have been me, it might have been you. People want it to be such an easy recipe to follow, but even if you look at an ML model...

Kyle: It's specific to you. [00:32:00]

Nader: Yeah, and the model, the situation. There's just so much tinkering, right? When you see a model that has however many experts in the MoE, it's like, why that many experts? They tried a bunch of things and that one seemed to do better. When it comes to how you're serving inference, you have a bunch of decisions to make, and you can always argue that you can take something and make it more optimal. But I think it's this internal calibration, and appetite for continued calibration.

Vibhu: Yeah. And that doesn't mean people aren't taking a shot at this, like Tinker from Thinking Machines, RL as a service. It also gets even harder when you try to do big model training, right? We're not the best at training MoEs when they're pre-trained. We saw this with Llama 4: they're trained in such a sparse way, because Meta knows there's gonna be a bunch of inference done on these. They'll open source it, but it's very much trained for what Meta's infrastructure wants; they wanna inference it a lot. Now the question to basically think about is, okay, say you wanna serve a chat application or a coding copilot, right? You're doing a layer of RL, you're serving a model for X amount of people. Is it a chat model, a coding model? Dynamo, back to that.

Kyle: [00:33:00] Yeah, sorry, we sort of jumped off that topic. Everyone has their own journey.

Cost Quality Latency Tradeoffs

Kyle: I like to think of it as defined by: what is the model you need, what is the accuracy you need? Actually, I talked to Nader about this earlier. There are three axes you care about. There's the quality you're able to produce: are you accurate enough, can you complete the task with high enough performance? There's cost: can you serve the model, or serve your workflow, because it's not just the model anymore, it's the workflow, the multi-turn with an agent, cheaply enough? And then: can you serve it fast enough? We're seeing all three of these play out. We saw new models from OpenAI that are faster; you have these new fast versions of models. You can change the amount of thinking to change the quality: produce more tokens, but at a higher cost and a higher latency. And really, when you start this journey of trying to figure out how you wanna host a model, you think about three things. What is the model I need to serve? How many times do I need to call it, what is the input sequence length, what does the [00:34:00] workflow look like on top of it? And what is the latency SLA that I need to achieve? Because that's usually a constant: you know the SLA you need to hit, and then you try to find the lowest-cost version that hits all of these constraints. Usually you start with those things, and then you do a bit of experimentation across some common configurations. You change the tensor parallel size, which is a form of parallelism.
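A sketch of the search Kyle describes: sweep a few common configurations, keep the ones that meet the latency SLA, then take the cheapest. `measure_latency` and the cost number are hypothetical stand-ins for a real benchmark harness (e.g. load-testing with a tool like NVIDIA's genai-perf):

```python
GPU_COST_PER_HOUR = 3.0  # hypothetical per-GPU price


def measure_latency(tensor_parallel: int) -> float:
    """Stand-in for actually load-testing each deployment.

    Returns p99 end-to-end latency in seconds; these numbers are made up.
    """
    fake_results = {1: 4.1, 2: 2.3, 4: 1.4, 8: 1.1}
    return fake_results[tensor_parallel]


def pick_config(sla_p99_s: float, tp_sizes=(1, 2, 4, 8)):
    """Lowest-cost tensor-parallel size that still meets the latency SLA."""
    feasible = []
    for tp in tp_sizes:
        latency = measure_latency(tp)
        if latency <= sla_p99_s:
            feasible.append((tp * GPU_COST_PER_HOUR, tp, latency))
    if not feasible:
        raise RuntimeError("No configuration meets the SLA; change model or hardware")
    cost, tp, latency = min(feasible)
    return tp, cost, latency


print(pick_config(sla_p99_s=2.5))  # -> (2, 6.0, 2.3): TP=2 is cheapest within SLA
```

The same loop extends to the other knobs Kyle mentions (replica count, disaggregation ratios); latency is the constraint, cost is the objective, quality is fixed by the model choice made earlier.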
Vibhu: It goes even deeper. First you gotta think: what model?

Kyle: Yes, of course. It's a multi-step design process, because, as you said, you can choose a smaller model and then do more test-time scaling, and it'll equal the quality of a larger model, because you're doing the test-time scaling or you're adding a harness or something. So yes, it goes way deeper than that. But from the performance perspective, once you get to the model you need to host, you look at that and say: hey, I have this model, I need to serve it at this speed. [00:35:00] What is the right configuration for that?

Nader: Did you guys see the recent paper, I just saw it a few days ago, that if you run the same prompt twice, you're getting like double...

Vibhu: Just try it again. Yeah. But the key thing there is that you give it the context of the failed try, right? So it takes a shot. And this has been basic guidance for quite a while: just try again. Did you try again? All advice in life.

Nader: It's a paper from Google, if I'm not mistaken, right?

Vibhu: Yeah, I think it's a little seven-page short paper. The title's very cute. And it's just like, yeah, just try again, give it the context.

Kyle: Multi-shot. You just say: hey, take a little bit more information, try and fail, fail.

Vibhu: And that basic concept has gone pretty deep. There's self-distillation RL, where you do self-distillation, you do RL, and you have the past failure, and that gives some signal, so it tries again.
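A minimal sketch of that retry-with-context pattern. `generate` is a placeholder for whatever LLM call you use, and `check` for your task's verifier; neither is a real API from the paper being discussed:

```python
def solve_with_retries(task: str, generate, check, max_tries: int = 3) -> str:
    """Retry the same task, feeding each failed attempt back as context.

    The point from the discussion above: a bare re-roll already helps, but
    telling the model what it tried and how it failed helps more.
    """
    prompt = task
    last_error = None
    for _ in range(max_tries):
        answer = generate(prompt)
        ok, error = check(answer)
        if ok:
            return answer
        last_error = error
        # Fold the failure back into the next prompt.
        prompt = (
            f"{task}\n\n"
            f"A previous attempt was:\n{answer}\n"
            f"It failed because: {error}\n"
            f"Produce a corrected answer."
        )
    raise RuntimeError(f"No valid answer after {max_tries} tries: {last_error}")
```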
swyx: For listeners who've made it this far: Vibhu and I run a second YouTube channel for our paper club, where...

Nader: Oh, that's awesome.

swyx: ...Vibhu just covered this. Self-distillation and all that. That's why he's up to speed [00:36:00] on it.

Nader: I'll have to check it out.

swyx: Yeah, it's just a good practice. Everyone needs a paper club where you read papers together and the social pressure kind of forces you to keep up.

Nader: There's a big inference reading group at NVIDIA. I feel so bad every time he shares something on there.

swyx: One of your guys is big in that, I forget... Eshan?

Kyle: Yeah, Eshan's on my team, actually. Funny, there's been an employee transfer between us: Eshan worked for Nader at Brev, and now he's on my team.

Nader: He was our head of AI. And then, yeah, once we got in...

swyx: Because I'm always looking for, okay, can I start another podcast that only does that thing? And I was trying to nudge Eshan into, is there something here? I mean, I don't think there are new inference techniques every day.

Kyle: You would actually be surprised at the amount of blog posts you see.

swyx: There was a period where it was like Medusa, Hydra, EAGLE...

Kyle: Now we have new forms of speculative decoding, or new...

swyx: What are you excited about?

Vibhu: And it's exciting when you guys put out something like Nemotron. I remember the Nemotron paper [00:37:00], the amount of post-training tokens that the GPU-rich can just train on. And it was a hybrid state space model, right?

Kyle: It's co-designed for the hardware.

Vibhu: Yeah, co-designed for the hardware. And one of the things was always that state space models don't scale as well when you do a conversion, the performance drops. And you guys were like, no, just keep training. And Nemotron shows a lot of that.

Nader: Also, something cool about Nemotron: it was released in layers, if you will, very similar to Dynamo. The pre-training and post-training datasets are released. The recipes on how to do it are released. The model itself is released, the full model; you just benefit from us turning on the GPUs. And there are companies like ServiceNow that took the dataset and trained their own model, and we were super excited and celebrated that work.

Vibhu: Zoom is different. Zoom is CGI, I think. But also, just to add: a lot of models don't put out base models, and if there's a reason fine-tuning hasn't taken off, that's it. With a base model you can do your own training.

Kyle: Sure.

Vibhu: You guys put out the base model. I think you put out everything.

Nader: I believe so. [00:38:00]

swyx: Basically, base can be cancelable.

Vibhu: Yeah, base can be cancelable. Safety training.

swyx: Did we get a full picture of Dynamo? I don't know if we...

Nader: What I'd love is, you mentioned the three axes. Break it down: what's prefill and decode, and what are the optimizations we can get with Dynamo?

Kyle: Yeah, that's a great point. So, to summarize on that three-axis problem: there are three things that determine whether or not something can be done with inference: cost, quality, latency. Dynamo is supposed to be there to provide you the runtime that allows you to pull levers, to mix it up and move around the Pareto frontier, or the Pareto surface, that determines: is this actually possible with inference and AI today?

Nader: It gives you the knobs.

Kyle: Yeah, exactly. It gives you the knobs.

Disaggregation: Prefill vs Decode

Kyle: One thing that we use a lot in contemporary inference, and that is starting to pick up in general knowledge, is this concept of disaggregation. Historically, models would be hosted with a single inference engine, and that inference engine [00:39:00] would ping-pong between two phases. There's prefill, where you're reading the sequence and generating KV cache, which is basically a set of vectors that represent the sequence. And then there's using that KV cache to generate new tokens, which is called decode. And some brilliant researchers, across multiple different papers, essentially made the realization that if you separate these two phases, you actually gain some benefits. Those benefits are, basically: A, you don't have to worry about step-synchronous scheduling. The way an inference engine works is you do one step, you finish it, and then you start scheduling the next step; it's not fully asynchronous. And the problem with that is that prefill and decode are actually very different, in terms of both their resource requirements and, sometimes, their runtime. So you would have prefill that would block decode steps, because you'd still be prefilling and you couldn't schedule, because the step has to end. So you remove that scheduling issue, and then you also allow yourself to [00:40:00] split the work into two different types of pools. Prefill, and this changes as model architecture changes, is right now compute-bound most of the time; when the sequence is sufficiently long, it's compute-bound. On the decode side, because you're doing a full pass over all the weights and the entire sequence every time you do a decode step, and you don't have the quadratic computation of KV cache, it's usually memory-bound: you're retrieving a linear amount of memory and doing a linear amount of compute, as opposed to prefill, where you retrieve a linear amount of memory and then do a quadratic amount of compute.
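Kyle's compute-bound vs memory-bound distinction can be made concrete with back-of-envelope arithmetic intensity, i.e. FLOPs per byte of weight traffic. A rough sketch under simplifying assumptions (a dense 70B-parameter model in fp16, attention terms ignored); the numbers are illustrative, not from the episode:

```python
# Back-of-envelope arithmetic intensity (FLOPs per byte of weights moved)
# for prefill vs a single decode step, on a simplified dense transformer.
# Assumptions: 70B params, fp16 weights, 8K-token prompt; attention's
# quadratic term is ignored for brevity.

PARAMS = 70e9
BYTES_PER_PARAM = 2          # fp16
WEIGHT_BYTES = PARAMS * BYTES_PER_PARAM

def prefill_intensity(seq_len: int) -> float:
    # Prefill processes all seq_len tokens in one pass: the weights are
    # read once but reused for every token, so FLOPs grow with seq_len
    # while weight traffic does not.
    flops = 2 * PARAMS * seq_len
    return flops / WEIGHT_BYTES

def decode_intensity() -> float:
    # A decode step generates one token: the full weight set is read for
    # only ~2*PARAMS FLOPs, i.e. roughly 1 FLOP per byte. Memory bound.
    flops = 2 * PARAMS
    return flops / WEIGHT_BYTES

print(f"prefill intensity @8K tokens: {prefill_intensity(8192):,.0f} FLOPs/byte")
print(f"decode  intensity per step:   {decode_intensity():.1f} FLOPs/byte")
# A GPU with, say, ~1000 fp16 TFLOPs and ~3 TB/s of HBM bandwidth needs
# roughly 300+ FLOPs/byte to stay compute bound: prefill clears that by
# a wide margin, a single decode stream does not.
```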
Nader: You know, it's funny: someone at Exo Labs did a really cool demo where, since the DGX Spark has a lot more compute, you do the compute-hungry prefill on a DGX Spark and then do the decode on a Mac.

Vibhu: And that's faster.

Nader: Yeah.

Kyle: So you can do that; you can do machine stratification. And with our future generations of hardware, we actually announced, with Rubin, this [00:41:00] new accelerator that is prefill-specific. It's called Rubin CPX.

Kubernetes Scaling with Grove

Nader: So I have a question. When you do the scale-out, is scaling out easier with Dynamo? Because when you need a new node, you can dedicate it to either the prefill or the decode.

Kyle: Yeah. Dynamo actually has a Kubernetes component in it called Grove that allows you to do this scaling specialization. I don't want to go too deep into Kubernetes here, but there was a previous way you would launch multi-node work called LeaderWorkerSet. It's in the Kubernetes standard, and LeaderWorkerSet is great; it served a lot of people super well for a long period of time. But one of the things it struggles with is representing cases where you have a multi-node replica that has a pair, prefill and decode, or that isn't paired but has a second stage with a ratio that changes over time. And prefill and decode are two different things. As your workload changes, the amount of prefill you'll need to do may change, [00:42:00] and the amount of decode you'll need to do might change. Say you start getting insanely long queries: that probably means your prefill scales harder, because you're hitting this quadratic scaling growth.

swyx: Yeah. And for listeners: prefill would be long input, decode would be long output, for example.

Kyle: Yeah. Decode is funny, because the number of tokens you produce scales with the output length, but the amount of work you do per step scales with the number of tokens in the context.

swyx: Yes.

Kyle: So it scales with both the input and the output.

swyx: That's true.

Kyle: But on the prefill-versus-decode side: if suddenly the amount of work you're doing on the decode side stays about the same, or scales a little bit, and the prefill side jumps up a lot, you actually don't want that ratio to stay the same. You want it to change over time. So Dynamo has a set of components that, A, tell you how to scale, how many prefill workers and decode workers it thinks you should have, and that also provide a scheduling API for Kubernetes that allows you to actually represent and effect this scheduling on your actual [00:43:00] hardware, on your compute infrastructure.

Nader: Not gonna lie, I feel a little embarrassed for being proud of my SVG function earlier.

swyx: No, it was really cute. I liked it.

Nader: It's all...

swyx: It's all engineering. It's all engineering. That's where I'm technical.
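The ratio-retargeting idea Kyle describes, deriving prefill and decode replica counts independently from the live workload, can be sketched as a toy autoscaler. This is not Grove's or Dynamo's actual API; the names and per-worker throughput figures below are hypothetical:

```python
# Toy sketch of ratio retargeting for disaggregated serving. NOT Grove's
# or Dynamo's real interface; it only shows why prefill and decode pool
# sizes should be computed separately from the workload, not pinned 1:1.

import math
from dataclasses import dataclass

@dataclass
class WorkloadStats:
    prompt_tokens_per_s: float   # drives prefill demand
    output_tokens_per_s: float   # drives decode demand

# Assumed per-worker throughput, e.g. measured in a load test (hypothetical).
PREFILL_TOKENS_PER_WORKER = 50_000
DECODE_TOKENS_PER_WORKER = 5_000

def desired_workers(stats: WorkloadStats) -> tuple[int, int]:
    prefill = math.ceil(stats.prompt_tokens_per_s / PREFILL_TOKENS_PER_WORKER)
    decode = math.ceil(stats.output_tokens_per_s / DECODE_TOKENS_PER_WORKER)
    return max(prefill, 1), max(decode, 1)

# Normal traffic: short prompts, chatty outputs.
print(desired_workers(WorkloadStats(100_000, 40_000)))    # -> (2, 8)
# "Insanely long queries": prompt load jumps 10x while outputs barely move,
# so the prefill pool scales hard and the decode pool stays nearly flat.
print(desired_workers(WorkloadStats(1_000_000, 50_000)))  # -> (20, 10)
```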
swyx: One thing I'm kind of just curious about, since you see everything going on here at a systems level, and we're scaling it up in distributed systems.

Context Length and Co-Design

swyx: I think one thing that's kind of of-the-moment right now is people asking: is there any sort of upper bound? Let's just call it context length, for want of a better word, but you can break it down however you like.

Nader: Yeah.

swyx: I just think, well, clearly you can engage in hybrid architectures and throw in some state space models in there all you want, but it still looks very attention-heavy.

Kyle: Yes. Long context is attention-heavy. I mean, we have these hybrid models...

swyx: And most models cap out at a million tokens of context, and that's it. For the last two years, that's been it.

Kyle: Yeah. The model-hardware-context co-design thing that we're seeing these days is actually super [00:44:00] interesting. It's my secret side passion. We see models like Kimi or GPT-OSS; I use these because I know specific things about these models. So Kimi K2 comes out, right? And it's an interesting model. It's a DeepSeek-style architecture, it's MLA; it's basically DeepSeek, scaled a little bit differently, and obviously trained differently as well. But they talked about why they made the design choices for context. Kimi has more experts but fewer attention heads, and I believe a slightly smaller attention dimension, though I'd need to check that; it doesn't matter. They discussed this at length in a blog post on Zhihu, which is like the Chinese Reddit.

swyx: Yeah.

Kyle: It's actually an incredible blog post. All the ML people I've seen on there are very brilliant, and the creators of Kimi K2 actually talked about it there in the blog post. And they say: we actually did an experiment, right? Attention scales with the number of heads, obviously. If you have 64 heads versus 32 heads, you do half the work of attention. You still scale quadratically, but you do half the work. And they made a [00:45:00] very specific sort of trade in their architecture. They basically said: hey, what if we gave it more experts, so we're going to use more memory capacity, but we keep the number of activated experts the same. We increase the expert sparsity, so the ratio of activated experts to total experts is smaller, and we decrease the number of attention heads.

Vibhu: And for context, what we had been seeing was that you make models sparser instead. No one was really touching heads.

Kyle: Well, they did; they implicitly made it sparser.

Vibhu: Yeah, for Kimi they did. They also made it sparser. But basically what we were seeing was people were at the level of, okay, there's a sparsity ratio: you want more total parameters, fewer active, and that's sparsity.
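The two levers in that Kimi K2 trade, attention head count and expert sparsity, are easy to put numbers on. A quick sketch; the configs below are illustrative stand-ins, not the published Kimi or DeepSeek figures:

```python
# Numeric sketch of the two levers discussed above: attention cost scales
# linearly with head count (while staying quadratic in sequence length),
# and MoE sparsity is the ratio of activated to total experts. All configs
# here are illustrative, not the real published model numbers.

def attention_flops(seq_len: int, n_heads: int, head_dim: int) -> float:
    # Per layer, scores (QK^T) plus value mixing: ~4 * heads * dim * seq^2
    return 4 * n_heads * head_dim * seq_len ** 2

def moe_active_ratio(total_experts: int, active_experts: int) -> float:
    return active_experts / total_experts

base = attention_flops(seq_len=128_000, n_heads=64, head_dim=128)
halved = attention_flops(seq_len=128_000, n_heads=32, head_dim=128)
print(halved / base)   # 0.5 -> half the heads, half the attention work

# The quadratic term is untouched: doubling the context quadruples the work.
print(attention_flops(256_000, 32, 128) / halved)   # 4.0

# The trade: spend the saved attention budget on more *total* experts while
# keeping *active* experts fixed, so per-token FLOPs stay roughly flat
# while model capacity grows.
print(moe_active_ratio(total_experts=256, active_experts=8))  # ~3.1% active
print(moe_active_ratio(total_experts=384, active_experts=8))  # ~2.1% (sparser)
```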
Vibhu: [00:46:00] But what you see from papers from labs like Moonshot and DeepSeek is that they go a level deeper: outside of just the number of experts, you can also change how many attention heads, fewer attention layers, more attention layers.

Kyle: Yes, yes.

Vibhu: And that's all basically tied together as hardware-model co-design, which is...

Kyle: Hardware-model-context co-design.

Vibhu: Yeah.

Kyle: Right. Like, if you were training a model that was really good at super-short-context tasks, you might design it in a way such that you don't care about attention scaling, because it hasn't hit the turning point where the quadratic curve takes over.

Nader: How do you consider attention or context as a separate part of the co-design? The way I would have thought of it is that hardware-model co-design would just be hardware-model-context co-design.

Kyle: Because the harness, and the context that is produced by the harness, is a part of the model once it's trained in.

Vibhu: Even though towards the end you'll do long context, you're not changing architecture through training.

Kyle: I mean, you can try.

swyx: You're saying [00:47:00] everyone's training the harness into the model.

Kyle: I would say to some degree, or...

swyx: There's co-design for the harness. I know there's a small amount, but I feel like not everyone has gone full send on this.

Kyle: I think it's important to internalize the harness that you think the model will be running into the model.

swyx: Yeah. Interesting. Okay. Bash is like the universal harness.

Kyle: Right. I'll give an easy proof here: if you can train against a harness and you're using that harness for everything, wouldn't you just train with the harness to ensure that you get the best possible quality out of it?

swyx: Well, I can provide a counter-argument, which is that you want to provide a generally useful model for other people to plug into their harnesses, right?

Kyle: Yeah, but harnesses can be open source, right?

swyx: Yeah. So I mean, that's effectively what's happening with Codex. But you may want a different search tool, and then you may have to name it differently.

Nader: I don't know how much people have pushed on this, but can you train a model... have people compared training a model for the harness versus [00:48:00] post-training for it?

swyx: I think it's the same thing. It's just extra post-training.

Nader: I see.

swyx: And Cognition does this, of course, where, if your tool is slightly different, you either force your tool to be like the tool that they trained for, or you undo their training for their tool and then retrain. It's really annoying.

Kyle: I would hope that eventually we hit a certain level of generality with respect to training new tools.

swyx: This is not AGI. This is a really stupid, "learn my tool, b***h" situation. I don't know if I can say that, but I think my point kind of is: I look at the slopes of the scaling laws, and this slope is not working, man.
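swyx's point that "training for the harness is just extra post-training" comes down to what the tool-call turns in the training data look like. A hedged sketch using a generic message format; the tool name `search_code`, the schema, and the alias shim are all hypothetical, not any specific lab's setup:

```python
# Sketch of why a harness gets "trained into" a model: the post-training
# data bakes in the exact tool names and argument shapes the harness
# exposes. `search_code` and this message schema are hypothetical.

training_example = [
    {"role": "system", "content": "You may call tools using the schema below."},
    {"role": "user", "content": "Find where retries are configured."},
    # The assistant turn the model learns to imitate, including the exact
    # tool name and argument shape the training harness exposed:
    {"role": "assistant", "tool_calls": [
        {"name": "search_code", "arguments": {"query": "retry", "max_results": 5}}
    ]},
]

# If your harness names the same capability `grep_repo`, the model's learned
# prior ("call search_code for code search") no longer matches. You either
# alias your tool to the trained-in name, or fine-tune the prior away.
TOOL_ALIASES = {"search_code": "grep_repo"}

def route_tool_call(call: dict) -> str:
    # Cheap compatibility shim: map the trained-in tool name onto yours.
    return TOOL_ALIASES.get(call["name"], call["name"])

print(route_tool_call({"name": "search_code"}))  # -> "grep_repo"
```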
We are at a million token con
In this episode, recorded live at the OpenAI studio, Sulman Choudhry (Head of ChatGPT @ OpenAI) pulls back the curtain on how they structure engineering teams! We talk about shifting from silos to fluid mission-driven teams, vertical vs. horizontal teams, maximizing cross-functional collaboration between research, engineering, product and design. Plus we cover “directly responsible individuals” for high accountability, managers as systems designers, scaling decision-making to prevent leadership from becoming bottlenecks, frameworks for mentoring junior engineers, why “problem framing” is the most critical skill, and how managers can stay close to problems and maintain technical intuition. ABOUT SULMAN CHOUDHRY Sulman leads ChatGPT Engineering at OpenAI, driving the development and scaling of one of the world's most impactful AI products. He pushes the boundaries of innovation by turning cutting‑edge research into practical, accessible tools that transform how people interact with technology. Previously at Meta, Sulman founded and scaled Instagram Reels, IGTV, and Instagram Labs, and helped lead the early development of Instagram Stories. He also brought MetaAI to Instagram and Messenger, integrating generative AI into experiences used by billions. Earlier in his career, Sulman was on the founding team that built and launched UberEATS from the ground up, helping turn it into a global food delivery platform. With a track record of marrying technical vision, product strategy, and large‑scale execution, Sulman focuses on building products that meaningfully change how people live, work, and connect. This episode is brought to you by xMatters! xMatters automates the entire incident lifecycle with their purpose-built AI powered workflow, giving your team the context they need to stop disruptions before they start and minimize resolution times. Head over to xmatters.com to learn more! SHOW NOTES: The Shift to AI-Native Engineering: How AI is collapsing the "Inner Loop" and reshaping engineering team composition (2:48) Mission-Driven Teams: Moving from traditional functional silos to integrated, problem-centric units (4:45) Vertical vs. Horizontal Team Architecture: How OpenAI structures specialized horizontal teams (ex. 
Infrastructure, RTC/Voice) with product verticals (7:04) Fluid org charts & blurring functional roles: AI-Native teams require proactive mission alignment and coordination over rigid structure (8:48) The Lifecycle of Problem-Oriented Teams: What happens when a "strike team" solves the problem (10:02) Maximizing cross-functional collaboration between engineering, research, product and design (11:52) The DRI Framework: Implementing the "Directly Responsible Individual" model for high-velocity accountability (13:32) Thriving in the "Chaos Factory": Addressing bottlenecks in highly dynamic, high-volume environments (16:02) Prioritization & "Letting 1,000 Flowers Bloom": How OpenAI decides which AI bets to double down on (19:13) Scaling Decision-Making: Preventing leadership from becoming the bottleneck as volume increases (21:19) Knowing when to call it quits on a bet and reallocate talent for maximum impact (23:29) The Manager as "Systems Designer": Shifting the EM role from people logistics to technical orchestration (24:49) The Barbell Talent Strategy: Optimizing for innovation by pairing "super seniors" with "super juniors" (28:10) Mentorship in the AI Age: How to coach junior engineers when the "cost of code" is approaching zero (30:19) Technical Intuition for Leaders: Sulman's frameworks for staying "close to the metal" as a manager (33:17) Cultivating Judgment: Why "Problem Framing" is the most critical skill for the modern engineer (37:01) Rapid fire questions (38:59) LINKS AND RESOURCES: 99% Invisible (https://99percentinvisible.org/): The design and architecture podcast Sulman has followed for over a decade. The Invisible Cow Tunnels of Chicago (https://99percentinvisible.org/episode/cow-tunnels/): A specific episode of 99% Invisible mentioned by Patrick. This episode wouldn't have been possible without the help of our incredible production team: Patrick Gallagher - Producer & Co-Host Jerry Li - Co-Host Noah Olberding - Associate Producer, Audio & Video Editor https://www.linkedin.com/in/noah-olberding/ Dan Overheim - Audio Engineer, Dan's also an avid 3D printer - https://www.bnd3d.com/ Ellie Coggins Angus - Copywriter, Check out her other work at https://elliecoggins.com/about/ Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Listen and subscribe to Money Making Conversations on iHeartRadio, Apple Podcasts, Spotify, www.moneymakingconversations.com/subscribe/ or wherever you listen to podcasts. New Money Making Conversations episodes drop daily. I want to alert you, so you don’t miss out on expert analysis and insider perspectives from my guests who provide tips that can help you uplift the community, improve your financial planning, motivation, or advice on how to be a successful entrepreneur. Keep winning! Two-time Emmy and three-time NAACP Image Award-winning television Executive Producer Rushion McDonald interviewed Brendan Kaminsky, founder of B Known Agency, a boutique branding and digital marketing firm specializing in sports and entertainment. Kaminsky shares his journey from consulting, to working at ESPN, to eventually launching his own agency. He discusses helping major personalities like Stephen A. Smith, Jalen Rose, Harrison Barnes, and Rich Eisen develop strong social media identities and storytelling strategies. Brendan explains why he left ESPN after six and a half years—despite the security, prestige, and Disney benefits—to pursue entrepreneurship. He describes how brand building has shifted from traditional media to a landscape where relatability, vertical video, audience engagement, and consistent content matter more than follower counts. He also talks about the pressure of managing public-facing work in real time, the importance of being accessible to high‑profile clients, the rising role of AI in content creation, and how social platforms have become core to modern marketing strategies. Additionally, Brendan shares specific examples of working with Jalen Rose on mixing sports commentary with community-focused storytelling and describes how Rich Eisen’s annual “Run Rich Run” 40‑yard dash evolved into a signature charitable brand moment. The interview closes with insights on relationship-building, authenticity, and visibility—reinforcing that in the digital era, it’s not just “who you know,” but who knows you. PURPOSE OF THE INTERVIEW 1. To highlight Brendan Kaminsky’s entrepreneurial journey McDonald explores how Kaminsky transitioned from a major corporation (ESPN) to founding a successful agency. 2. To educate listeners on the evolving world of branding and digital media Kaminsky explains how branding now depends on relatability, vertical video, and engagement over follower count. 3. To provide actionable guidance for entrepreneurs and creators The interview teaches how consistency, accessibility, and storytelling help build a recognizable digital brand. 4. To show how athletes and media personalities use content to expand influence Brendan walks through real client strategies—from Jalen Rose’s community work to Rich Eisen’s fundraising dash. 5. To explore the role of AI in modern marketing Kaminsky discusses how AI assists with analytics, research, and identifying viral content moments. KEY TAKEAWAYS 1. Relatability drives modern branding People connect with authenticity, not polished promotion. Talk to your audience, not at them. 2. Engagement matters more than follower count Algorithms reward content that resonates, regardless of how many people follow you. A creator with 10,000 followers can hit a million views. 3. Social media requires presence and accessibility High-profile clients expect responsiveness; being available is key to agency success. 4. Vertical video is the new standard Optimizing content for mobile consumption is essential—TV graphics no longer dictate how content is built. 5. AI is an asset, not a threat Kaminsky uses AI for virality scoring, caption suggestions, research, and identifying strong clips from long-form content. 6. Data tells the story Success can be clearly measured through views, engagement, and growth—unlike billboards or traditional media. 7. Use “hot topics” to highlight deeper work For clients like Jalen Rose, trending sports conversations help drive attention to community-focused initiatives like his leadership academy. 8. Brand moments can start from something small Rich Eisen’s 40-yard dash evolved into a signature charity event and content anchor. 9. Entrepreneurship requires trusting your gut He left ESPN without telling anyone beforehand to avoid discouragement—because he felt the pull to build his own vision. 10. Visibility creates opportunity In the digital era, it’s not just who you know—it’s who knows you. NOTABLE QUOTES On entrepreneurship “I trusted my gut… I didn’t tell one person I was leaving ESPN because I didn’t want anyone to make me doubt myself.” On branding “People want to relate to you. They want to get to know you.” “Talk directly to your audience.” On social metrics “It’s become a lot more about engagement and views than total follower number.” On accessibility “You could be the best at your job, but if a client can’t reach you, it doesn’t matter.” On visibility “It’s not about who you know—it’s about who knows you.” On AI “AI is absolutely an asset… it helps us with research, analytics, even virality scoring.” #SHMS #STRAW #BEST Steve Harvey Morning Show Online: http://www.steveharveyfm.com/ See omnystudio.com/listener for privacy information.
Design Curious | Interior Design Podcast, Interior Design Career, Interior Design School, Coaching
Have you ever finished a beautiful project… only to realize you have nothing to show for it? I've seen so many talented interior designers pour their heart, time, and creativity into a space — only to walk away without the one thing that helps them book their next client: professional portfolio photos. Without strong website images or scroll-stopping social media visuals, it becomes harder to build trust, showcase your design expertise, and grow your business. In this episode, I'm walking you through how to consistently get portfolio-worthy photos of your interior design projects — even if you're new, working with real-life clients, or unsure how to approach photography contracts, styling, or working with an editorial photographer. Because when you have high-quality interior photography, everything changes — your portfolio strengthens, your brand elevates, and your work finally gets the visibility it deserves. What You'll Learn in This Episode: ✔️ Secure client permission through photography contracts ✔️ Choose editorial over real estate photographers ✔️ Style spaces for magazine-quality photos ✔️ Capture storytelling photo composition ✔️ Plan photography into project expenses. Read the Blog >>> Interior Design Photography Tips for Stunning Portfolio Photos. NEXT STEPS:
Most runners don't lose form because they're weak. They lose it because they never trained their nervous system to hold it. If you've ever filmed your running form at the start of a workout and thought, “That looks solid,” only to feel it completely unravel 30 minutes later, this episode is for you. I break down why form falls apart under fatigue, what elite runners actually do before they run to stay smooth, and how you can train your body to default to efficient movement even when you're tired. I walk you through a simple pre-run system that takes just a few minutes, explain the movement phases of running in plain language, and show you how to build better mechanics without obsessing over cues mid-race. This is about training your nervous system, not just thinking about posture, so your body knows what to do automatically when it matters most. Key Takeaways: If your form only works when you're fresh, it's not trained yet. You need to practice movement patterns before you run so your body defaults to them under fatigue. Vertical bounce, forward lean, heel lift, and knee drive all work together. When you isolate and train each phase, your stride becomes more efficient and powerful. Just a few minutes of focused drills before a run can prime your nervous system. Over time, this makes smooth, strong form feel automatic instead of forced. Timestamps: [00:15] What You'll Learn [00:55] The Real Problem With 'Fixing' Form [03:19] How Running Actually Works [04:19] Use This Free Training Plan To Do The 3 Drills Now [05:17] Drill #1: Ankling [06:39] Common Mistakes [08:19] Drill #2: Butt Kicks [09:59] Drill #3: High Knees [12:30] The Arm Swing Phase [13:14] How To Add This Into Your Pre-Runs [14:05] Use This To Do A-Skips. Links & Learnings
In this episode, Enoch Graham shares practical strategies for growing abundant food in small urban spaces. Drawing on 15 years of gardening in the Rogue Valley of Southern Oregon, Enoch explains how to maximize production in patios, rooftops, and compact yards. He outlines his Nine Keys to Small Space Gardening, covering water systems, sunlight management, container growing, vertical gardening, soil health, and creative use of limited space. The conversation also explores soil biology, organic practices, and why patience, especially during the first year, is essential for long-term garden success. Our Guest: Enoch Graham is the host of the weekend gardening talk YouTube show 'Let's Get Growing'. He has interviewed hundreds of the world's top gardening communicators and shares his small space gardening practices on his YouTube channel, the Urban Gardener. He has been growing his urban food garden for 15 years in Southern Oregon's Rogue Valley, utilizing many different spaces, from a cemented back patio to a carport rooftop used to grow peppers. He has learned a lot over the years and truly loves sharing his experience with other passionate growers in the gardening community. Key Topics: Enoch Graham, small space urban gardening, container gardening, drip irrigation and drip tape, rainwater capture and alternative water sources, sunlight management in urban environments, vertical gardening and trellising, layered planting systems, soil health and organic soil building, compost and organic matter, biochar in soil mixes, OMRI-certified organic soil products, no-till container gardening, Rogue Valley, Southern Oregon. Questions Answered: What are the most important factors for growing food in small urban spaces? Enoch outlines nine key principles that guide successful small-space gardening: reliable water access, adequate sunlight, containers, vertical growing, layered planting, soil management, and creative use of available spaces. How can urban gardeners secure a reliable water supply? Gardeners should start by identifying nearby water sources such as hose spigots, rain barrels, gray water systems, condensation capture, or stormwater runoff. Consistent watering is essential, especially in container gardens where soil dries quickly. What irrigation methods work best for small gardens? Hand watering allows gardeners to observe plant health closely. However, automated drip irrigation systems or drip tape with timers are helpful when gardeners are away or during hot summer months. How do buildings and urban structures affect sunlight? Walls, fences, and tall buildings can create heavy shade. Gardeners should observe how sunlight moves through the space during the day and select shade-tolerant crops when necessary. Why are containers essential in urban gardens? Containers allow gardening on patios, rooftops, and paved surfaces. Larger containers—typically five gallons or more—help maintain moisture and support stronger plant growth compared to smaller pots. How can vertical growing increase productivity? Trellising vining crops like tomatoes, peas, beans, cucumbers, and even melons allows gardeners to grow upward instead of outward, maximizing limited square footage. What does layering mean in a garden system? Layering involves growing plants at different heights—similar to a food forest—so taller plants capture sunlight above while shade-tolerant plants grow beneath them. Why is soil management especially important in container gardening? Container soil must provide structure, drainage, nutrients, and living biology. Good mixes often include compost, coco coir, vermiculite or perlite, and organic amendments. Why might a container garden struggle in its first year? New soil takes time to develop microbial life and balance. Gardeners should expect improvement in subsequent seasons as soil biology develops. How can gardeners maintain healthy container soil long-term? Instead of replacing soil each year, gardeners can treat containers like no-till systems by simply adding compost annually to replenish organic matter and nutrients. Episode Highlights: Successful small-space gardening starts with reliable water access and consistent irrigation. Urban shade patterns require careful observation before choosing crops. Five-gallon containers or larger help stabilize moisture and support plant growth. Vertical trellising dramatically increases yield per square foot. Layering plants mimics natural ecosystems and maximizes sunlight use. Healthy soil contains dirt, air space, water, organic matter, and living organisms. OMRI-certified products help maintain organic growing practices. Container soil improves over time as microbial life develops and compost is added annually. Calls to Action & Resources: Drip Tape Class — Learn irrigation techniques taught each March by Urban Farm. Urban Gardener YouTube Channel — Enoch Graham shares small-space gardening practices - https://www.youtube.com/@theUrbanGardener. OMRI Organic Certification — https://www.omri.org. Visit www.urbanfarm.org/TreasureYourGarden for the show notes on this episode, and access to our full podcast library! Need a little bit of advice or just feedback on your design for your yard or garden? The Urban Farm Team is offering consults over the phone or Zoom. Get the benefits of a personalized garden and yard space analysis without the cost of trip charges. You can chat with Greg or choose one of the senior members of our Urban Farm team to get permaculture-based feedback. Click HERE to learn more! *Disclosure: Some of the links in our podcast show notes and blog posts are affiliate links and if you go through them to make a purchase, we will earn a nominal commission at no cost to you. We offer links to items recommended by our podcast guests and guest writers as a service to our audience and these items are not selected because of the commission we receive from your purchases. We know the decision is yours, and whether you decide to buy something is completely up to you.
After closing the series Una Iglesia Viva (A Living Church), we begin a new miniseries called Entrenamiento Espiritual (Spiritual Training). Because if the church is alive… it needs to train. The Bible tells us in 1 Timothy 4:8 that physical exercise is good, but spiritual training is much better, because it has eternal benefits. And that leads us to a key question: Are we training only our body… or our spirit as well? In this first teaching we talk about “La Vertical,” that is, our direction toward God. Just as no one masters an advanced routine without practice, discipline, and guidance, we don't grow spiritually on theory alone. We need to train. Jesus showed us the model. And in this session we learn three essential practices for strengthening our relationship with Him: ✨ Prayer – Real conversation that transforms the heart. The Word – Meditating on and living out what God speaks to us. Praise – A lifestyle of gratitude, beyond our circumstances. Spiritual growth doesn't happen overnight. It is a progressive process. It is sanctification. It is inner transformation. It is allowing God to do in us what we cannot do on our own. It is not about doing more so that God will love us. It is about training ourselves to live aware that we already have access to Him. If you want to strengthen your spirit, renew your mind, and live a practical, deep faith, this teaching is for you. If you have any questions, write to us at hola@somosviva.org. Follow us to enjoy all our content and join us in person every Sunday at 10:00 AM. ©️Iglesia Cristiana Viva, Cra. 22 164 24, Brr. Toberín, Bogotá, Colombia. If you want to know more about our community, visit our website: www.somosviva.org
This week's episode begins with a moment of celebration because Sakshi had a birthday! Wooo! The girls went to a cricket match with the noble intention of rage-baiting men but things took an unexpected turn and Naina ended up becoming the national crush (which to be fair enraged a lot of men, so mission accomplished). From there we spiralled into a very real conversation about men stealing women's work, taking credit for their ideas, and the mysterious ability some guys have to dim a girl's shine the moment she starts doing a little toooo well for their liking. We asked the important question: how do you go to bed at night after stealing someone else's work?? Our heartfelt condolences to the girl who had her cancer research stolen and an even bigger sorry to the girl who had her instagram bio stolen. This led us to the ultimate dilemma: your man vs your career. We debated ambition, insecurity, supportive partners, unsupportive partners, and why women are so often expected to shrink themselves to make other people comfortable. Somewhere in the middle of all this, we reminded you (and us) that you just gotta do what you want, no matter what people say - because once you make it, they WILL come around. P.S.: Sakshi has a new iPad but we still don't have 100k subscribers. It's very un-feminist of you to not hit subscribe so go do it now, along with liking, commenting, hyping, following, and adding our names to your final will and testament. Chaptering: 00:00 – Intro, do you really need this 01:33 – Naina went viral on X, became the national crush, and announced her pregnancy 02:29 – What really went down in Chennai 05:25 – All the planning & plotting behind Sakshi's birthday 12:00 – The many times we overextended ourselves 12:34 – Women's careers and men… with credit-stealing stories 15:37 – The essay that had Sakshi fuming 21:22 – Cancelling a meeting for a partner 26:00 – Men vs women's résumés 31:36 – A hack for all the men 33:12 – Why we said we still needed a man in our life 37:00 – OTPs becoming the next power play 49:40 – Sharing the household load 53:00 – The letters we wrote in our teens 01:02:44 – Supportive parents 01:04:29 – Therapy sessions, but the results 01:06:09 – Sakshi's jokes and her BF 01:08:37 – Defining what “normal” meant 01:09:43 – Like, Share, Comment & Subscribe. Brutally Honest Creators - https://youtube.com/playlist?list=PLHkcqImp8gcbZHzn1secwSYYKG8dds437&si=wYCafRcBIKDy0BDC Comedians Unfiltered - https://youtube.com/playlist?list=PLHkcqImp8gcabWOmtiYQUUXGU4ptrq9HB&si=sWm2ep8LZr8GU_7c Follow MoS on Instagram: https://www.instagram.com/momentofsilencepod/reels/?hl=en Credits: Naina Bhan - Co-host and certified overthinker https://www.instagram.com/nainabee?ig... Sakshi Shivdasani - Co-host, balancing out Naina's overthinking with a healthy dose of not thinking https://www.instagram.com/sakshishivdasani/?hl=en Senior Producer - Amruta P.
https://www.linkedin.com/in/amruta-bandivdekar-01879925 Produced by "Vertical by Handmade" - Our personal cheering squad https://www.instagram.com/verticalbyhandmade?igsh=NG1vdXd5bWdsdWI3 Creative direction by Tinkre, Keeper of MoS' signature “Pookie” energy - Natascha Mehra https://www.instagram.com/tinkre.in/?hl=en https://www.instagram.com/natascha.zip/?hl=en Researched by our very own curiosity engineer - Aashna Sharma https://www.linkedin.com/in/aashna-sharma-913146179 Reel Editor - Yug Verma https://www.instagram.com/bass_abhiyug?igsh=MnlibHdsbG56MjNl&utm_source=qr Disclaimer: The views and opinions expressed on this podcast are for entertainment purposes only and do not necessarily reflect those of the hosts, the production team, or affiliated brands. We don't claim to be experts - just two people with Wi-Fi and feelings. While we encourage open dialogue, we do not guarantee the accuracy, completeness, or reliability of any information shared. Listener discretion is advised — especially if you're allergic to strong opinions.
Superpowers for Good should not be considered investment advice. Seek counsel before making investment decisions. When you purchase an item, launch a campaign or create an investment account after clicking a link here, we may earn a fee. Engage to support our work. Watch the show on television by downloading the e360tv channel app to your Roku, LG or Amazon Fire TV. You can also see it on YouTube. Devin: What is your superpower? Kevin: Persistence and flexibility. What if we could double the energy output of existing wind farms without using more land? Kevin Wolf, CEO and Co-founder of Wind Harvest, has been working to make this vision a reality. His team has developed vertical axis wind turbines that harvest turbulent winds near the ground, a resource previously considered unusable. The idea isn't new; vertical axis turbines have been attempted for decades but failed to overcome engineering challenges. “The turbulent wind has stopped all other technologies from being able to make use of it,” Kevin explained. “It took a long time, a lot of money, and a lot of prototyping of full-scale prototypes…to finally have this product ready for the market.” The key innovation is a patented hinge system that solves a critical weakness of vertical axis turbines—mechanical stress on the blades. “Vertical axis turbines have failed for decades,” Kevin said. “The problem is they rotate 15 million times a year…every rotation, there's a pull with a blade like an airplane wing and a push as it comes around the other side. That connection point breaks…[but] if you put a hinge, all that micro movement is taken up in the hinge.” The potential impact of Wind Harvest's turbines is enormous. By placing them on existing wind farms, they can double the energy output per acre in ideal locations without requiring new infrastructure or land. “We can double the wind farm energy output with our turbines,” Kevin emphasized. He noted that this approach also avoids the costs and environmental impacts of developing new wind farms. “It's a much faster way of developing new wind farms…as opposed to taking raw land, new habitat, and converting that into a wind farm.” Wind Harvest is raising capital through a regulated crowdfunding campaign on StartEngine. This effort allows small investors to support clean energy innovation. Kevin explained the unique challenge of funding such groundbreaking work: “There is a lot of doubt at the level of the venture capitalists…they want us to finish the third-party certification.” In the meantime, crowdfunding has allowed Wind Harvest to bridge the gap and move closer to commercialization. You can find more about their campaign on StartEngine (top 15 in amount raised on StartEngine) and be part of an investment opportunity to drive clean energy forward. tl;dr: Kevin Wolf's vertical axis wind turbines harvest turbulent winds near the ground, doubling wind farm efficiency. Wind Harvest's patented hinge system solves a key flaw in traditional vertical axis turbines. Deploying these turbines on existing wind farms reduces costs, accelerates permitting, and avoids new land use. Kevin attributes his success to persistence, flexibility, and consensus-building skills honed over decades. Wind Harvest is raising capital via crowdfunding to finalize certification and commercialize their turbines. How to Develop Persistence and Flexibility As a Superpower: Kevin attributes his success to persistence and flexibility. “I do not like to not have something succeed,” he shared, adding, “I've learned over time that big things take a long time to do.” His training as a river guide and evolutionary ecologist shaped his ability to adapt. “You learn to take new data, change your mind, adjust your hypothesis,” he explained. By pairing this adaptability with relentless persistence, Kevin has overcome significant obstacles in his career. Kevin shared a story from his days at Wolf & Associates, where he helped the city of Sausalito reach consensus on a fire department expansion. After the city's initial proposal failed, Kevin facilitated community meetings, allowing dissenting voices to be heard and their concerns addressed. By incorporating feedback and revising the proposal, the city council gained overwhelming public support in a re-vote. This experience highlights Kevin's ability to persist through challenges while remaining flexible to new perspectives. Tips for Developing the Superpower: Embrace persistence by committing to long-term goals, even when progress feels slow. Stay flexible by adapting to new data and revising plans when necessary. Practice active listening to fully understand others' perspectives. Help others clarify their thoughts by rephrasing and restating their concerns. By following Kevin's example and advice, you can make persistence and flexibility a skill. With practice and effort, you could make it a superpower that enables you to do more good in the world. Remember, however, that research into success suggests that building on your own superpowers is more important than creating new ones or overcoming weaknesses. You do you! Invest in Wicked-Fast Coffee! Guest Profile: Kevin Wolf (he/him), CEO and Co-founder, Wind Harvest International. About Wind Harvest International: Wind Harvest is a U.S.-based renewable energy technology company developing and selling industrial-scale vertical-axis wind turbine (VAWT) systems designed for deployment in turbulent wind resources close to the ground. Its patented Wind Harvester technology is engineered to operate efficiently in a wide range of wind conditions, with a compact footprint, low profile, and highly durable design. Website: windharvest.com. Company Facebook Page: facebook.com/windharvest. Other URL: startengine.com/offering/wind-harvest. Biographical Information: Kevin Wolf is the co-founder and CEO of Wind Harvest International, where he's spent nearly two decades advancing utility-scale vertical-axis wind turbines for turbulent, mid-level winds. He facilitated the engineering team through Technology Readiness Level milestones, oversaw R&D on the coupled-vortex effect, hired key engineers and team members, and helped take the company from early grants and Series A through testing of multiple prototypes. Starting in 2019 when he became CEO again, he has steered capital strategy—closing investor rounds, running multiple crowdfunding raises, converting company debt to equity in 2022, and completing audits. Beyond Wind Harvest, Wolf brings a long record of environmental leadership and civic work. A UC Davis graduate in Evolution & Ecology, he launched his career with Friends of the River and later founded Wolf & Associates, facilitating multi-stakeholder, consensus-based watershed and environmental solutions across California.
He chairs the California Clean Money Action Fund to reduce the power of money in affecting elections and legislation, co-founded the pioneering N Street Cohousing community in Davis, and has served on numerous local boards and commissions.LinkedIn Profile: linkedin.com/in/kevin-wolfPersonal Facebook Profile: facebook.com/kevin.wolf.9256Support Our SponsorsOur generous sponsors make our work possible, serving impact investors, social entrepreneurs, community builders and diverse founders. Today's advertisers include rHealth, and p!ng. Learn more about advertising with us here.Max-Impact Members(We're grateful for every one of these community champions who make this work possible.)Brian Christie, Brainsy | Cameron Neil, Lend For Good | Carol Fineagan, Independent Consultant | Hiten Sonpal, RISE Robotics | John Berlet, CORE Tax Deeds, LLC. | Justin Starbird, The Aebli Group | Lory Moore, Lory Moore Law | Mark Grimes, Networked Enterprise Development | Matthew Mead, Hempitecture | Michael Pratt, Qnetic | Mike Green, Envirosult | Nick Degnan, Unlimit Ventures | Dr. Nicole Paulk, Siren Biotechnology | Paul Lovejoy, Stakeholder Enterprise | Pearl Wright, Global Changemaker | Scott Thorpe, Philanthropist | Sharon Samjitsingh, Health Care Originals | Add Your Name HereUpcoming SuperCrowd Event CalendarIf a location is not noted, the events below are virtual.Superpowers for Good Live Pitch – Private Investor Session: Immediately following the March 17, 2026, live broadcast at 8 PM ET / 5 PM PT, investors are invited to join an exclusive private Zoom session to engage directly with the presenting founders—BRG Therapeutics (Dale Walker), GigaWatt (Deep Patel), My Diabetes Health (Dr. Prem Sahasranam), and rHEALTH (Eugene Chan). In this dedicated off-air environment, participants can ask deeper questions about strategy, traction, deal terms, and impact while exploring their active Regulation Crowdfunding campaigns in real time. Watch the live pitches on Roku, Amazon Fire TV, LG Smart TVs via e360tv, LinkedIn, YouTube, or Facebook—then continue the conversation in the private investor session where capital and clarity come together. Register free to get access to both events.SuperCrowd Impact Member Networking Session: Impact (and, of course, Max-Impact) Members of the SuperCrowd are invited to a private networking session on March 17th at 1:30 PM ET/10:30 AM PT. Mark your calendar. We'll send private emails to Impact Members with registration details. Upgrade to Impact Membership today!SuperCrowdHour March: This month, Devin Thorpe will explore how investors can align profit with purpose in a powerful session titled “Why You Should Make Money with Impact Crowdfunding.” As CEO and Founder of The Super Crowd, Inc., Devin will share practical insights on generating financial returns while driving measurable social and environmental impact through regulated investment crowdfunding. Register free to get all the details. March 18th at Noon ET/9:00 PT.SuperCrowd26 featuring PurposeBuilt100™: This August 25–27, founders, investors, and ecosystem leaders will gather for a three-day, broadcast-quality global experience focused on disciplined capital formation, regulated investment crowdfunding, and purpose-driven growth. We're bringing together leading voices in impact investing, compliance, digital marketing, and circular economy innovation to deliver practical frameworks, real-world case studies, and actionable strategies. 
The event culminates in the PurposeBuilt100™ Showcase, recognizing 100 of the fastest-growing purpose-driven companies in the U.S. Register now to secure your seat and get all the details. August 25–27, streaming worldwide.Community Event CalendarSuccessful Funding with Karl Dakin, Tuesdays at 10:00 AM ET - Click on Events.If you would like to submit an event for us to share with the 10,000+ changemakers, investors and entrepreneurs who are members of the SuperCrowd, click here.Manage the volume of emails you receive from us by clicking here.We use AI to help us write compelling recaps of each episode. Get full access to Superpowers for Good at www.superpowers4good.com/subscribe
How to Trade Stocks and Options Podcast by 10minutestocktrader.com
Are you looking to save time, make money, and start winning with less risk? Then head to https://www.ovtlyr.com.Ever notice how the biggest market moves usually start with a story before anyone even realizes what is happening?That is what this conversation is all about.In this episode, Chris sits down with Shane to talk through how narratives form in the market and how traders can turn those narratives into real trade opportunities. The discussion starts with a fascinating topic that many investors are just starting to hear about: photonics.Instead of traditional electronic systems transferring data inside massive AI data centers, photonics uses lasers to move information dramatically faster while also helping manage the heat produced by powerful chips. In some cases, these systems can transmit data up to 100 times faster than traditional electronic methods. If AI demand continues exploding the way many expect, technologies like this could become a major investing theme.But this conversation is not just about a single technology. It is really about how traders think.Chris walks through why a strong story alone is never enough in the market. A narrative might spark the idea, but price action and signals still have to confirm the move. That is where tools like OVTLYR come in, helping traders cut through the noise and focus on moments when the market is actually moving.Along the way, the discussion touches on several emerging themes that traders are starting to watch closely:✅ Why photonics may become a major driver behind AI infrastructure✅ How narratives in sectors like titanium and rare metals can move stocks✅ Why seasonal trends like fertilizer demand can create opportunities✅ How OVTLYR signals help confirm when a setup is actually worth trading✅ Why price action always has the final say in the marketThe big takeaway is simple.Stories may start the fire in the market. But price and momentum are what tell you when the move is real.If understanding how narratives, sectors, and technical signals come together in real trading sounds interesting, this is a conversation worth watching all the way through.Subscribe to OVTLYR for disciplined trading strategies that actually make sense.
The two hundred and eighteenth episode, produced by Men-E-Men Stüdyo, is here for you.We started this episode with a short recap of the Brit Awards. Then we talked about AI-generated videos, and the celebrities and characters used in them. We touched on the copyright issue once again.After that, we discussed a new series format that has become very popular, looking at the rapid rise of vertical-format dramas made up of short, easy-to-watch episodes.With the conversation turning to series, we took the anniversary that the United States' longest-running animated series celebrated in recent weeks as a starting point to talk about the show's achievements.
OneCrew is building end-to-end operational software for asphalt and concrete contractors—a segment caught between Procore's general contractor focus and ServiceTitan's field services model. After leaving Bain & Company and Google, Ari Bleemer and his co-founder Max identified that self-performing specialty contractors who handle everything from estimating to payment collection had no purpose-built platform. In this episode, Ari shares how they've spent four and a half years building trust in an industry skeptical of software promises, why they resisted the urge to expand horizontally across multiple construction trades, and what they learned about sustainable vertical SaaS growth.Topics Discussed:How the middle segment of construction—self-performing contractors who run the full project lifecycle—remains structurally underservedBuilding trust in a market burned by consultants promising custom software for $10,000 that never worksWhy every employee at OneCrew, regardless of function, goes through industry-specific onboarding to learn paving terminology and contractor workflowsThe strategic decision to delay expansion into adjacent verticals despite having configurable product architectureHow sustained market presence compounds credibility faster than any go-to-market tacticGTM Lessons For B2B Founders:Map the white space between dominant platforms: OneCrew identified that Procore owns general contractors coordinating multiple trades, while ServiceTitan and others own single-visit field services. The gap: specialty contractors executing complete projects—estimating, proposing, executing, and collecting payment. Ari describes it as "the entire middle of the industry where you have a lot of self perform contractors, specialty contractors, trade contractors, subcontractors...that are actually running a process from start to end." Map your market by understanding what established platforms actually serve versus claim to serve, then target the operational workflows that fall through the cracks.Use "niche" skepticism as market validation: When VCs, friends, and family question if your market is too narrow, you've likely found defensible positioning. Ari's test: "Have you been on a sidewalk today? Have you driven on a road today? Have you been in a parking lot today?" The paving industry powers daily infrastructure but gets zero attention from horizontal software players or large AI companies. Founders should seek markets where usage is ubiquitous but mindshare and software investment are minimal—that's where you build sustainable moats.Make product fluency a company-wide competency: OneCrew requires every hire—engineers, sales, operations—to learn paving industry terminology, contractor pain points, and workflow nuances during onboarding. This isn't just sales training; it's embedding industry context into product decisions, customer conversations, and roadmap prioritization. The payoff: "Contractors come up to us and say like, it feels like you guys actually get it, which there's no better compliment for us." In vertical SaaS, domain expertise distributed across the entire company drives faster iteration cycles and deeper customer trust than any single "industry expert" hire.//Sponsors:Front Lines — We help B2B tech companies launch, manage, and grow podcasts that drive demand, awareness, and thought leadership. www.FrontLines.ioThe Global Talent Co. — We help tech startups find, vet, hire, pay, and retain amazing marketing talent that costs 50-70% less than the US & Europe. 
www.GlobalTalent.co//Don't Miss: New Podcast Series — How I Hire: Senior GTM leaders share the tactical hiring frameworks they use to build winning revenue teams. Hosted by Andy Mowat, who scaled 4 unicorns from $10M to $100M+ ARR and launched Whispered to help executives find their next role. Subscribe here: https://open.spotify.com/show/53yCHlPfLSMFimtv0riPyM
Health Affairs' Rob Lott interviews Derek T. Lake on his recent paper exploring new research on Optum's acquisitions, finding the company tended to buy physician practices already using ambulatory surgery centers and that its ASC acquisitions were followed by higher prices for competing insurers.Order the February 2026 issue of Health Affairs.Currently, more than 70 percent of our content is freely available - and we'd like to keep it that way. With your support, we can continue to keep our digital publication Forefront and podcast freely available.
Show Notes 0:00: Justin and Helen finally are able to talk about things they’ve been up to!…Well, Justin’s been up to finally watching the new season of Medalist now that it’s on Hulu and experienced paying to go to a theater for the first time since COVID-19 (all for Uma Musume: Beginning of a New Era). Helen on the other hand has finally watched Journal with Witch (2 episodes so far) like the rest of the cool people! The hosts then get ready to talk about the news over the past few weeks. And it begins with some of the worst anime/manga news that’s ever been covered on this podcast. News 6:13: Shogakukan’s Manga ONE editorial department issued a statement and an apology this past Friday regarding manga creator Shōichi Yamamoto, after the editorial department had allowed Yamamoto to publish a new manga on the service under a pen name after he had been arrested and convicted of a sex crime. The details are not only many but can also be triggering. It’s fairly staggering, so you’ll want to check out Anime News Network and Strict Algorithm for all the details, but a quick summary: Shoichi Yamamoto returned as a writer for Joujin Kamen, with Eri Tsuruyoshi drawing the series in 2022, despite his arrest and conviction for a sex crime in 2020, with the department making sure the artist was not aware of Yamamoto’s past since he is now known as Hajime Ichiro. An editor for MangaOne was involved in this situation, even going so far as to try and strike a deal with the victim. Once details emerged this past Friday of how awful the crime was and that the publication covered it up, many manga artists — from those working for Shogakukan to those not working for them — were angry, and those working for Shogakukan demanded not only a proper response but to have their works removed from MangaOne. Then Saturday, Shogakukan announced they will set up an investigative committee that will include lawyers to clarify the facts of the situation. Both hosts discuss this horrifying scandal as they know it and what this ultimately says about Shogakukan (21:10) — and as Justin expected (26:48), more news would eventually emerge over time after the episode was recorded on Sunday, and yesterday, while continuing their internal investigation of MangaOne, Shogakukan revealed that Tatsuya Matsuki, the writer for act-age who was convicted of a sex crime in 2020 and dismissed from Shonen Jump with the cancellation of that manga, was hiding under the pen name of Miki Yatsunami while working on a manga on the service (Seisō no Shinri-shi). We at TheOASG send our apologies to the victim, Eri Tsuruyoshi, and those affected by this situation. 26:57: Media Do, considered the largest e-book distributor in Japan (and sold off their shares of MyAnimeList last year), has acquired Seven Seas Entertainment for US$80 million. More details emerged since the two hosts talked about it on Sunday, and it was concerning enough that Seven Seas had to put out a statement yesterday. Expect both hosts to discuss this partnership more in the next episode. 33:29: The two hosts discuss how the “Ring Ring Live in Osaka” concert event put on by the Himitsu no AiPri staff was cancelled due to threats; Helen goes over Sho-Pro Books (Shogakukan-Shueisha Productions) announcing that its contract to publish Marvel Comics titles in Japanese will end on March 31; and Manga Mavericks Books now has a distribution deal with Pathway Book Service and with Gazelle Book Services Ltd for Europe so we’ll be able to see their print books at certain retailers soon. 
39:27: The Gift-o’-Animation studio’s founder and former president Satoshi Mori passed away February 20 after battling an illness for some time; Talent agency Haikyō announced in February that voice actor Masaru Ikeda died on January 31; And Kodansha shared on their Instagram that Vertical publishing company’s co-founder and Kodansha USA Editorial Director Ioannis Mentzas passed away a few weeks ago. Licenses 41:55: Last episode the hosts talked about a bunch of companies licensing manga. This episode the two talk about more things getting licensed by companies, first with VIZ: Hyuganatsu, Minoji Kurata, & Touko Shino's The Apothecary Diaries: Maomao's Notes on the Inner Palace Kotoyama's Call of the Night: Paradise Arc Inio Asano's Heroes Shuzo Oshimi's Sound of a Blink Paru Itagaki's Witching Hour Glitch Productions and Gooseworx & Sakura's The Amazing Digital Circus Akihisa Maki & Miki Yatsubo's Albus Changes the World Asato Shima's The Seaside Where Dragon Boys Dwell Yori Katakura's Yakuza vs. Cat Esu Omori's Shiba Inu Rooms Agatha Christie & Aya Nikaidō's And Then There Were None Renka Misaki & Yūto Suzuki's Sakamoto Days: Assassin’s Blues LN 3-in-1 edition of Mizuho Kusanagi's Yona of the Dawn Soshichi Tonari's Horror Picture Book: Looking at Me, with illustrations by Junji Ito My Hero Academia Box Set 2 Jujutsu Kaisen Complete Box Set (Will also include Volume 0) Black Torch Complete Box Set One Piece Box Set 5: Wano to Egghead My Hero Academia: Ultra Artworks art book Dorohedoro Illustrations: Mud and Sludge art book The Studio Ghibli Chronicles book 46:10: Square Enix announced they’ve licensed the following works: Natsu Hyūga, Itsuki Nanao, & Touco Shino's The Apothecary Diaries: Xiaolan's Story Gyūnyūmugigohan's Boyish Girlfriend Mugimo's My Ex-Boyfriend Loves Boys’ Love! Yuo Yodogawa's Stalker Stalks Stalker sooncha's Yang Can’t Live Alone Shinichi Fukuda & Choboraunyopomi's My Dress-Up Darling XOXO! My Dress-Up Darling Season 2 Official Anime Fanbook Asaki Asagiri & selen's The Princess Groom 47:52: SuBLime had a couple announcements during their Valentine's Day event — Puling's Sunshine in Hades, Fumi Tsuyuhisa's Robin in the Veil of Night, & Natsuki Kizu's Given 10th Mix; meanwhile Seven Seas announced during their Citrus Con panel on Sunday that they’ve licensed IROHA MEGU's WOLFHOUND and two Hayate Kuku manga (STRANGER: A WESTERN BL & MARCHEN) 48:30: Michi Masaki's Tell Me, Dear Butler, Robico’s To Dusk and Twilight, & Jun Wakatsuki's Promise Me the Spotlight are now on K MANGA; Takumigraphics, the new spinoff imprint from Fantagraphics, has licensed Gengoroh Tagame's Do You Remember the South Island’s POW Camp? which shocked Helen; Tei Monaka & Komari Kuro's All-Rounder Maid Connie Ville has been licensed by new publisher Crossed Heart; and Eke Shimamizu's The Maid I Admire Looks Good with a Cigarette is now on Manga UP!. 49:30: Manga Mirai has a couple new additions to their service; The Lady version of Cells at Work! will be put in print by Kodansha; and the two hosts discuss Glacier Bay Books taking up what Matt Haasch wanted to do with Star Fruit Books as they announced they’ll be handling the publishing line moving forward. 54:56: MediaOCD and AnimEigo announced they’ve licensed Master of Martial Hearts & Sketchbook ~full color’S~, both expected to release this year. 
MediaOCD also announced the new round of titles it is adding to its store as part of the Discotek Deep Dives initiative (a good amount); meanwhile Discotek has a couple re-releases upcoming this year and a Patlabor OVA coming at some point; And finally, Sentai Filmworks has licensed Heavy Metal L-Gaim, which continues Sentai's habit of licensing an older work out of nowhere. Streaming News 1:00:02: The Madoka Magica movie has a new release date, and you can hear the two hosts' editorial thought process in this moment as they decide no matter what happens — if it screens in Japan or if it doesn’t — it will wind up in weird news somehow; Akane-Banashi will be available for people to see, but at this moment, not on your typical anime services…which will be shocking for a Shonen Jump property. 1:02:59: We have some screenings going on in the US — Anime Central will screen the original anime Goodbye, Lara in May; Next week people will be able to watch a 4K restoration of Kiki’s Delivery Service; and Demon Slayer: Infinity Castle Part 1 will also see another screening, which leads Justin to wonder when he’ll watch it since Crunchyroll doesn’t want to stream it yet! 1:04:51: The Me and Robico film has been added to Crunchyroll; Hulu and Disney+ will stream the Rooster Fighter anime in a few weeks (will first debut on Toonami); and Hulu now has the HD versions of the Pretty Cure English dub on their service. 1:05:58: The Criterion Collection’s streaming service Criterion Channel announced it will add Gunbuster: The Movie and the first season of Ghost in the Shell: Stand Alone Complex sometime in March; Hideaki Sorachi’s debut one-shot manga Dandelion is getting an anime series adaptation that will stream exclusively on Netflix starting in April; and that 18+ site Oceanveil (who also sometimes streams non-18+ anime) will stream the English-subtitled first episodes of Do You Like Big Girls? and Marika’s Love Meter Malfunction in advance. Weird News 1:08:23: Pokémon’s 30th Anniversary is this year, and there’s lots of things going on with the franchise…starting with the original voice of Ash Ketchum (Satoshi in Japan) doing a Let’s Play in celebration and, well, the franchise sharing their 30th Anniversary logos. All 1,025 of them! 1:10:07: Let’s just say the highs and lows of Japan are covered in this section, from a very shirtless buff man as a hanger to an AI buddharoid. 1:12:34: Two of Japan’s famous properties — Crayon Shin-chan and Sazae-san — are gonna cross over; We got a story involving recent gold medalist and Olympic star Alysa Liu and how she has a Pochita! 1:15:41: And finally, apparently Amazon really wants to be a big player in the anime destination game, which we would take seriously except there’s no real sign that they’re actually serious about it at the moment! If there’s anything you’d like to share, please feel free to reach out to us on Twitter (@TheOASG) or comment below with your thoughts! The post TheOASG Podcast Episode 238: We Talk About The Shogakukan Scandal appeared first on TheOASG.
Eric Byunn of Centana Growth joins Nick to discuss The Future of Fintech, If VC Growth Has Become a New Asset Class, and the Case For and Against Vertical Integration in the AI Age. In this episode we cover: Due Diligence and Value Creation Investment in Jumio and Identity Verification Growth Expectations and Market Realities Lessons from Netscape and Industry Evolution Investor Responsiveness and Connectivity Guest Links: Eric's LinkedIn Centana Growth Partners' LinkedIn Centana Growth Partners' Website The host of The Full Ratchet is Nick Moran of New Stack Ventures, a venture capital firm committed to investing in founders outside of the Bay Area. We're proud to partner with Ramp, the modern finance automation platform. Book a demo and get $150—no strings attached. Want to keep up to date with The Full Ratchet? Follow us on social. You can learn more about New Stack Ventures by visiting our LinkedIn and Twitter.
-EPiC: Elvis Presley in Concert [03:11] -Scarlet [14:30] -Cleaner: Rescate vertical [23:44] -Paramount buys Warner (VI) [40:06] -Scanners community [58:17]
Vertical Drama Star Richard Sherrah shares an experience he had with Lucasfilm!
Happy Friday Mosquitoes & Mossies!In honour of our real life besties NEVER listening to our advice, we started this ep off with a lil bit of healing by way of giving advice to our Bollywood besties who we believe had gone down the wrong path. Criminal guys and gals seem to be the flavour of the season when it comes to falling in love, and MOS does not approve. By popular demand we are back with an episode on the futility of advising your girl besties, broken female friendships, bury-the-body friends, and the myth that is a mature friendship breakup. Naina and Sakshi debate the difference between supporting your friends and enabling them, with a humble request from Sakshi to do some enabling when it comes to Meerut pronunciations. We also discussed whether our parents' generation did friendship better, Naina tried to start a pen pal community, and Sakshi requested that you remember that this is a comedy podcast - so have some pity on a bin-byaahi beti. Finally, we unpacked a few stories from MOS listeners who experienced some insane girl besties, and took a moment of silence for the mom friend of the group (hug your nearest mom friend, give them a day off, pls). Bonus: Exam szn got Sakshi feeling nostalgic and she gave us a mini-recap of a gadhyasankalan short story that has traumatized many an ICSE student.If you are a tru MOS bestie, go enable us by hitting like, share, hype, subscribe, follow, and scream it from the rooftops so that we can hit 100k. Chaptering:00:00 – Introduction: Should men start in jail?00:51 – Female friendships, because you asked for it02:33 – What advice do you give Bollywood characters as their BFF?04:28 – A friendly shoutout to Snooze Club06:00 – Onscreen kissing scenes & accidentally witnessing a friend's PDA08:41 – Dating your therapist? Let's discuss10:30 – Why women should have male friends (in moderation)13:29 – Asking your male BFF for help… oh no, not again16:47 – Phrase of the week17:27 – Comment the name of your BFF who can do anything for you20:37 – Friendship breakups that hurt more than divorce22:10 – Workplace & school friendships26:58 – Does friendship end with a breakup or just fizzle out?33:12 – Reviewing O Romeo37:23 – Digital detox for an influencer? Not happening39:02 – Bonding only with like-minded people?43:18 – Brace ourselves for a Hindi monologue52:00 – This story deserves a reaction58:27 – A heart-touching story01:06:18 – Where do you actually meet people? Suggestions inside01:09:37 – Like, share, follow, subscribe, hype us up & help us reach 100KFollow MoS on Instagram:https://www.instagram.com/momentofsilencepod/reels/?hl=enCredits: Naina Bhan - Co-host and certified overthinkerhttps://www.instagram.com/nainabee?ig...Sakshi Shivdasani - Co-host, balancing out Naina's overthinking with a healthy dose of not thinkinghttps://www.instagram.com/sakshishivdasani/?hl=enSenior Producer- Amruta P. 
https://www.linkedin.com/in/amruta-bandivdekar-01879925Produced by "Vertical by Handmade" - Our personal cheering squad https://www.instagram.com/verticalbyhandmade?igsh=NG1vdXd5bWdsdWI3Creative direction by Tinkre; Keeper of MoS' signature “Pookie” energy - Natascha Mehrahttps://www.instagram.com/tinkre.in/?hl=enhttps://www.instagram.com/natascha.zip/?hl=en Researched by our very own curiosity engineer - Aashna Sharma https://www.linkedin.com/in/aashna-sharma-913146179Reel Editor - Yug Vermahttps://www.instagram.com/bass_abhiyug?igsh=MnlibHdsbG56MjNl&utm_source=qrDisclaimer: The views and opinions expressed on this podcast are for entertainment purposes only and do not necessarily reflect those of the hosts, the production team, or affiliated brands. We don't claim to be experts - just two people with Wi-Fi and feelings. While we encourage open dialogue, we do not guarantee the accuracy, completeness, or reliability of any information shared. Listener discretion is advised — especially if you're allergic to strong opinions.
⭐️⭐️ Click here to listen to Dentcast 155 on the official website ⭐️⭐️❌❌❌ In this episode we continue the discussion from the previous Dentcast about VD and the common beliefs surrounding it, and we bring the conversation to a close.This is the last Dentcast of the year 1404.
This episode opens with an improvised serialized scene called “The Rusted Lantern” — a short noir novella-style reading performed live — then unfolds into a wide-ranging, candid conversation about auditions, writing, producing indie films, creative burnout, social-media monetization, sobriety, pacing your career, and practical tips for makers trying to get work done with limited time and money. The trio (Jen Bartels, Caitlin Brodnick, Isaac Abrams) balance a playful creative exercise with honest, useful career talk for actors, writers, creators and anyone making art in the modern media ecosystem.Episode highlights and expanded description- Live novella performance: Isaac prompts an AI-style 90-second soap-opera novella; the group performs multiple takes of a moody scene set at a corner table at “The Rusted Lantern.” - Performance craft & acting advice: After the reading the hosts debrief on cold reads, self-tape auditions, the tension between following explicit direction vs. owning the moment, and strategies for staying present in auditions. - Writing & making your own work: The hosts discuss how to move from performer to creator — startup routines for writing a script or short, how to attack a seemingly overwhelming feature project (write the small scene you can't stop thinking about), using collaboration and iterative drafts, and practical tools (WriterDuet workflows, co-writing in a room, and the value of deadlines).- Indie filmmaking realities: Low-budget production advice — how to make content when money is scarce.- Encouraging closing: the hosts emphasize longevity in creative careers — the importance of craft, tenacity, and staying connected to why you started. They invite listeners to submit novella prompts, short scenes, and theme-song ideas for future episodes.video chapters00:00 — Opening banter & show settling (names, tone) 02:50 — Novella setup: prompt, characters, and format explained 03:50 — First full read: “The Rusted Lantern” — take one (moody intro) 06:15 — Key reveal: leather case, Cassandra Hale photograph, stakes established 07:10 — Cliffhanger note & “To be continued” title card 07:40 — Take two: refined performance, additional screen-direction beats 11:10 — Performance debrief: cold reads vs acting-from-truth, practical audition tactics 13:30 — Audition horror stories and director/room etiquette (what to expect) 16:00 — Writing advice: micro-goals, “write the scene you can't stop thinking about” technique 18:10 — Indie film logistics: crew, budget tiers, attaching names & fundraising realities 21:30 — Monetization talk: social clout, viral jingles, content reuse issues and legal basics 24:15 — Vertical platforms vs long-form: pros, cons, and creative strategies 27:40 — Personal check-in: sobriety, routines, naps, and creative energy management 31:05 — Genre tastes, movie talk, and quick career anecdotes (commercials, background work) 34:20 — Creative collaboration: building teams, finding faithful collaborators vs “traitors” 37:50 — Tools & workflow: writing software, co-write sessions, timeline tips for busy creators 40:30 — Closing: production ideas, call for submissions (theme songs, novella prompts), final banter#Podcast #Improv #Screenwriting #IndieFilmmaking #AuditionTips #VerticalVideo #CreativeBurnout #Sobriety #ContentCreationClosing note and inviteWe close by inviting listeners to submit short novellas, two-line scene prompts, theme-song demos, or project ideas — whether you're an actor, writer, director or first-time creator. 
We'll sample listener submissions, read prompts on air, and possibly develop serialized shorts based on the best seeds. If you want to contribute, email goodtoseeyoupodcast@gmail.com or DM the show on Instagram with your clip or idea.Thanks for listening — if this episode sparked even one idea or made you feel less alone in the hustle, subscribe and drop a rating.
In this episode of Best in Fest, host Leslie LaPage sits down with Maegan La'Trese Fillmore — director, producer, activist, and founder of Hudson Fillmore — for a no-nonsense conversation about the real economics of independent filmmaking today.Maegan shares her journey from overseeing productions at Comedy Central, VH1, MTV, the NFL, YouTube Originals, and Paramount to directing award-winning indie projects like Soul Tie, and why she ultimately chose to build outside the studio system.In this episode, we break down:
Are you wondering if multifamily real estate is still a good investment in 2026? In this episode, Cameron Christiansen and Anthony Faso welcome Robert Pereira, founder and CEO of ARC Multifamily Group. Robert, who started his real estate journey during the 2008 downturn, has grown ARC into a successful multifamily operator with over 3,500 units. He explains why multifamily investments are still attractive despite challenges like inflation, increased construction costs, and rising insurance premiums. With over 20 years of experience, Robert discusses how the fundamentals of multifamily are back on track and why now is a great time for long-term investors. He shares his philosophy on ensuring investor protection, which includes clear business plans, strong communication, and a focus on returning capital. Robert also highlights how markets that were oversupplied a few years ago are now seeing positive leverage opportunities. Tune in for valuable insights on real estate, investment strategies, and what to look for when choosing a multifamily operator. In This Episode: - Why multifamily investors feel let down - The impact of rent growth and inflation in real estate - What to be aware of when investing in multifamily - Why multifamily remains a strong investment in 2026 - The lessons learned from the 2017-2021 real estate boom - Vertical integration: Why it's key for multifamily success - Investor protection and growth during tough times - How ARC evaluates deals in the current market - Debt funds vs. equity deals: The right investment strategy - How ARC protects investors - Multifamily investment limitations Resources:
Is your steak a byproduct of a corrupt financial ledger? Texas Slim (@modernTman) explains how food centralization serves as currency debasement. We discuss the 1971 "Big Fat Lie" and how ending the gold standard led to declining nutritional integrity via subsidized grains. Slim argues the health of our children is proof of work, noting the current legacy system is failing.Modern cattle ranching is a struggle against corporate cartels. For years, the industry has prioritized inflationary weight gain over biological vitality. Slim describes the transition from forage-based systems to scientific manipulation. This centralization has hurt independent ranchers through regulatory capture and debt traps.El Salvador is now a hub for regenerative agriculture and food security. Slim is moving away from Angus beef marketing myths to launch heritage breed programs designed for local microbiomes. Rather than a one-size-fits-all approach with Brahman cattle, he is building a sovereign food system. He believes fixing the money is the first step toward fixing the food.Vertical integration allows producers to remove parasitic middlemen. The Beef Initiative develops decentralized micro-processing to return power to ranchers. By owning the value chain from the water table to the fork, producers can move away from the industrial machine.The acquisition of beef.com represents a change. It acts as the digital backbone for a global movement connecting producers and consumers via a Bitcoin standard. This infrastructure ensures the narrative remains with land stewards. The goal is to build a future based on hard assets.—Bitcoin Beach TeamConnect and Learn more about Texas SlimX: Main: https://x.com/modernTmanX: Movement: https://x.com/@beefinitiativeX: Media: https://x.com/@TexasSlimsCutsIG: https://www.instagram.com/iamtexasslim/IG: https://www.instagram.com/texasslimscuts/YT: https://www.youtube.com/@iamtexasslimWeb: https://harvestofdeception.substack.com/Web: https://beef.comWeb: https://beefinitiative.com/Web: https://beefnews.org/Web: https://beefmaps.com/ Support and follow Bitcoin Beach:X: https://www.twitter.com/BitcoinBeach IG: https://www.instagram.com/bitcoinbeach_sv TikTok: https://www.tiktok.com/@livefrombitcoinbeach Web: https://www.bitcoinbeach.com Browse through this quick guide to learn more about the episode:00:00 Intro05:42 Why the 1971 money shift ruined our food11:08 How to exit the corporate meat monopoly18:16 Why El Salvador is the hub for food security22:49 How to build a sovereign cattle program24:13 How decentralized processing kills the food cartel31:59 Fixing food economics: Price per acre vs. pound37:05 Mining volcanic soil for high-density protein51:00 How Beef.com disrupts global middlemen1:06:01 Protecting your wealth with hard assetsLive From Bitcoin Beach
In this episode of The Distribution, Brandon Sedloff sits down with Steven DeFrancis, Founder and CEO of Cortland, to unpack how multifamily evolved from a commodity product into a true consumer service business. Steven shares the story behind Cortland's transformation from a small merchant builder into a vertically integrated investment manager with more than 75,000 units and $20 billion in gross asset value. The conversation explores why operational depth, brand trust, and technology infrastructure now sit at the center of performance in living real estate. Steven walks through the post-GFC research that reshaped Cortland's strategy, the demographic shifts that extended renter lifecycles, and the deliberate decision to build operational infrastructure long before raising institutional LP capital. He also details how brand equity translates directly into pricing power, retention, and investor returns, and why scale is increasingly essential in a consolidating market. They discuss: The pivot from merchant development to a vertically integrated operating platform Why multifamily shifted from a commodity to a consumer service business How brand trust creates measurable top-line rent premiums and longer resident tenure The role of data, AI, and centralized workflows in reducing fraud, speeding leasing, and improving performance Why 2026 and beyond may present compelling acquisition opportunities amid capital market stress and supply overhang Links: Cortland - https://cortland.com/ Steven on LinkedIn - https://www.linkedin.com/in/steven-defrancis-022a564/ Brandon on LinkedIn - https://www.linkedin.com/in/bsedloff/ Juniper Square - https://www.junipersquare.com/ Topics: (00:00:00) - Intro (00:03:21) - Steven's background and career (00:13:48) - Building Cortland and lessons from the GFC (00:20:06) - Building a vertically integrated operating platform (00:24:13) - Raising institutional LP funds (00:28:02) - Cortland's scale, markets, and fund vehicles (00:34:22) - Operational alpha (00:42:20) - 2026 market outlook (00:50:40) - Tech and AI in multifamily (00:55:28) - Advice for operators (01:00:11) - Closing thoughts
Episode Synopsis: Today, we are talking about the new Ashley Valley Gorge Via Ferrata in Vernal, Utah. Via ferrata climbing is a type of mountain climbing that uses fixed cables, ladders, and metal rungs attached to the rock to help climbers safely traverse steep terrain. We get the perspective of Clint Cook, The CEO of Via Ferrata Solutions, and Mike Cook, the Uintah County Trails Manager, and hear what it took to build the nation's longest Via Ferrata. This new epic outdoor adventure was just built in Vernal, Utah, and it's already bringing visitors from around the world. Watch the Documentary Series Watch this episode of Small Town Comeback, an original documentary series, at www.smalltowncomeback.org Show Notes: Visit the town in Vernal, Utah: dinoland.com Sponsors This episode is brought to you by: Uintah County Travel and Tourism Uintah County Economic Development Vernal City Credits: This show is produced by Summer Creative Agency and V6 Media. Host: Becca Summers Audio Engineer: Coby Coonradt Assistant Producer: Eden Bostrom
Chris Holman welcomes George Cook, VP of Sales and Marketing for TARUS, Sterling Heights, MI. Welcome George, please tell us about TARUS? As a finalist for the 2025 Manufacturing Innovation Excellence Award, from the MMA, what does this recognition say about the future role of vertically integrated technology companies like TARUS in shaping the next generation of manufacturing operations? From a business strategy standpoint, what drove the decision to apply an innovation mindset to developing an in-house ERP platform like VERAX? VERAX is described as “created by manufacturers for manufacturers.” How does that translate into measurable business outcomes—such as cost control, throughput, or decision-making—compared to traditional, off-the-shelf ERP systems? Industry 4.0 capabilities like real-time machine monitoring, biometrics, and geolocation are built into VERAX. How are manufacturers using these tools today to improve productivity and competitiveness in an increasingly data-driven environment? » Visit MBN website: www.michiganbusinessnetwork.com/ » Subscribe to MBN's YouTube: www.youtube.com/@MichiganbusinessnetworkMBN » Like MBN: www.facebook.com/mibiznetwork » Follow MBN: twitter.com/MIBizNetwork/ » MBN Instagram: www.instagram.com/mibiznetwork/ Sterling Heights based VERAX ERP Selected as Finalist for Manufacturing Excellence Award STERLING HEIGHTS — Sterling Heights' own VERAX ERP is receiving statewide recognition as a finalist for the 2025 Innovation Excellence Award. The honor is part of the Manufacturing Excellence Awards, presented annually by the Michigan Manufacturers Association (MMA). MMA will reveal and honor the winners of the 2025 Manufacturing Excellence Awards during a celebration on Thursday, Nov. 20, 2025, in Lansing. VERAX ERP was selected as a finalist for the 2025 Innovation Excellence Award due to its dedication and expertise in the industry. VERAX ERP is one of the very few pure-play software products wholly produced in the State of Michigan that services the complex needs of manufacturing companies statewide. The Manufacturing Excellence Awards is the annual statewide celebration of the exceptional contributions that Michigan manufacturers make to their workforce, their communities, the economy and the industry. The program promotes the inspiring stories of Michigan's manufacturing industry, the thousands of unique manufacturing companies across the state, the hundreds of thousands of Michiganders employed in the industry and the local communities that support it. Starting out of a garage in Warren, Michigan in 1969, TARUS manufactures a variety of machine tools for heavy industry, including large volume, high-precision 5-axis CNC machines, gundrill and deephole drilling machines for nuclear power, coordinate measuring machines, and was the inventor of the Claymill. The Claymill revolutionized car and transportation design worldwide and TARUS remains the preeminent global leader. Key to TARUS' success since its founding is its belief in total vertical integration. In the late 1970s, this philosophy meant TARUS created its own CNC control for the machines it built. It laid the foundation of software development dating back almost 50 years. For more than 120 years, MMA has served as a unifying champion of an industry that is in constant evolution and growth. 
Michigan's manufacturers represent perhaps the most diverse manufacturing center in the entire world, and, just as it has since the industrial revolution, Michigan will continue to be the cradle of innovation and invention for generations to come. MMA's sole purpose is to advocate for, support, train and grow the manufacturing industry in Michigan. Learn more about MMA and the 2025 Manufacturing Excellence Awards at mimfg.org/excellence.
Jordan Crawford explains the Permissionless Value Prop, a way of combining internal and external data to create outreach that earns attention.- Why most AI SDR tools produce identical messages- The limits of firmographic ICPs- How to define a “paying qualified segment”- Vertical vs. horizontal GTM trade-offs- Where RevOps should start with AI
Disney+ Launches First Vertical Series “Locker Diaries” https://whatsondisneyplus.com/disney-launches-first-vertical-series-locker-diaries/ #DisneyPlus VISIT ONLINE - http://www.WhatsOnDisneyPlus.com If you enjoy our content, please consider supporting it via our Patreon or as a YouTube Channel Membership from as little as $2 a month and get access to exclusive content and much more.
WWJ auto analyst John McElroy reports Lear Corporation found a lot of savings by moving to vertical integration and is now the biggest American automotive supplier.
January 26, 2026: Air Taxis and Vertical AerospaceKeep your eyes on the skies, because electric vertical takeoff and landing aircraft are about to take off.Not quite an airplane and not quite a helicopter, an eVTOL is perhaps best described as a piloted drone that carries passengers. They ascend straight up during takeoff, are quieter than a refrigerator, give off zero emissions, and can reach top speeds of 200 miles per hour.eVTOLs are a unique new form of transportation that could reduce traffic congestion in densely populated areas and are gaining regulatory clearances in both the US and abroad. Several companies are already conducting pilot programs that have been partially funded by airlines and automakers.On Monday's show, MyWallSt's founder Emmet Savage and I discuss how this new industry is reaching a higher altitude.Our stock of focus was Vertical Aerospace (NYSE: EVTL), a fascinating innovator that's also much less expensive than its eVTOL peers.⚠️ Not financial advice. Do your own research before investing.#evtol #watchlist #stockpicks #dividends #chipstocks #marketing #7investing #investing2026 #techinvesting
From rewriting Google's search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions from CPUs and sharded indices to multimodal models that reason across text, video, and code.Jeff joins us to unpack what it really means to “own the Pareto frontier,” why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules) not FLOPs is becoming the true bottleneck, what it was like leading the charge to unify all of Google's AI teams, and why the next leap won't come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.We discuss:* Jeff's early neural net thesis in 1990: parallel training before it was cool, why he believed scaling would win decades early, and the “bigger model, more data, better results” mantra that held for 15 years* The evolution of Google Search: sharding, moving the entire index into memory in 2001, softening query semantics pre-LLMs, and why retrieval pipelines already resemble modern LLM systems* Pareto frontier strategy: why you need both frontier “Pro” models and low-latency “Flash” models, and how distillation lets smaller models surpass prior generations* Distillation deep dive: ensembles → compression → logits as soft supervision, and why you need the biggest model to make the smallest one good* Latency as a first-class objective: why 10–50x lower latency changes UX entirely, and how future reasoning workloads will demand 10,000 tokens/sec* Energy-based thinking: picojoules per bit, why moving data costs 1000x more than a multiply, batching through the lens of energy, and speculative decoding as amortization* TPU co-design: predicting ML workloads 2–6 years out, speculative hardware features, precision reduction, sparsity, and the constant feedback loop between model architecture and silicon* Sparse models and “outrageously large” networks: trillions of parameters with 1–5% activation, and why sparsity was always the right abstraction* Unified vs. 
specialized models: abandoning symbolic systems, why general multimodal models tend to dominate vertical silos, and when vertical fine-tuning still makes sense* Long context and the illusion of scale: beyond needle-in-a-haystack benchmarks toward systems that narrow trillions of tokens to 117 relevant documents* Personalized AI: attending to your emails, photos, and documents (with permission), and why retrieval + reasoning will unlock deeply personal assistants* Coding agents: 50 AI interns, crisp specifications as a new core skill, and how ultra-low latency will reshape human–agent collaboration* Why ideas still matter: transformers, sparsity, RL, hardware, systems — scaling wasn't blind; the pieces had to multiply togetherShow Notes:* Gemma 3 Paper* Gemma 3* Gemini 2.5 Report* Jeff Dean's “Software Engineering Advice from Building Large-Scale Distributed Systems” Presentation (with Back of the Envelope Calculations)* Latency Numbers Every Programmer Should Know by Jeff Dean* The Jeff Dean Facts* Jeff Dean Google Bio* Jeff Dean on “Important AI Trends” @Stanford AI Club* Jeff Dean & Noam Shazeer — 25 years at Google (Dwarkesh)—Jeff Dean* LinkedIn: https://www.linkedin.com/in/jeff-dean-8b212555* X: https://x.com/jeffdeanGoogle* https://google.com* https://deepmind.googleFull Video EpisodeTimestamps00:00:04 — Introduction: Alessio & Swyx welcome Jeff Dean, chief AI scientist at Google, to the Latent Space podcast00:00:30 — Owning the Pareto Frontier & balancing frontier vs low-latency models00:01:31 — Frontier models vs Flash models + role of distillation00:03:52 — History of distillation and its original motivation00:05:09 — Distillation's role in modern model scaling00:07:02 — Model hierarchy (Flash, Pro, Ultra) and distillation sources00:07:46 — Flash model economics & wide deployment00:08:10 — Latency importance for complex tasks00:09:19 — Saturation of some tasks and future frontier tasks00:11:26 — On benchmarks, public vs internal00:12:53 — Example long-context benchmarks & limitations00:15:01 — Long-context goals: attending to trillions of tokens00:16:26 — Realistic use cases beyond pure language00:18:04 — Multimodal reasoning and non-text modalities00:19:05 — Importance of vision & motion modalities00:20:11 — Video understanding example (extracting structured info)00:20:47 — Search ranking analogy for LLM retrieval00:23:08 — LLM representations vs keyword search00:24:06 — Early Google search evolution & in-memory index00:26:47 — Design principles for scalable systems00:28:55 — Real-time index updates & recrawl strategies00:30:06 — Classic “Latency numbers every programmer should know”00:32:09 — Cost of memory vs compute and energy emphasis00:34:33 — TPUs & hardware trade-offs for serving models00:35:57 — TPU design decisions & co-design with ML00:38:06 — Adapting model architecture to hardware00:39:50 — Alternatives: energy-based models, speculative decoding00:42:21 — Open research directions: complex workflows, RL00:44:56 — Non-verifiable RL domains & model evaluation00:46:13 — Transition away from symbolic systems toward unified LLMs00:47:59 — Unified models vs specialized ones00:50:38 — Knowledge vs reasoning & retrieval + reasoning00:52:24 — Vertical model specialization & modules00:55:21 — Token count considerations for vertical domains00:56:09 — Low resource languages & contextual learning00:59:22 — Origins: Dean's early neural network work01:10:07 — AI for coding & human–model interaction styles01:15:52 — Importance of crisp specification for coding agents01:19:23 — 
Prediction: personalized models & state retrieval01:22:36 — Token-per-second targets (10k+) and reasoning throughput01:23:20 — Episode conclusion and thanksTranscriptAlessio Fanelli [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by Swyx, editor of Latent Space. Shawn Wang [00:00:11]: Hello, hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome. Thanks for having me. It's a bit surreal to have you in the studio. I've watched so many of your talks, and obviously your career has been super legendary. So, I mean, congrats. I think the first thing must be said, congrats on owning the Pareto Frontier.Jeff Dean [00:00:30]: Thank you, thank you. Pareto Frontiers are good. It's good to be out there.Shawn Wang [00:00:34]: Yeah, I mean, I think it's a combination of both. You have to own the Pareto Frontier. You have to have like frontier capability, but also efficiency, and then offer that range of models that people like to use. And, you know, some part of this was started because of your hardware work. Some part of that is your model work, and I'm sure there's lots of secret sauce that you guys have worked on cumulatively. But, like, it's really impressive to see it all come together like this.Jeff Dean [00:01:04]: Yeah, yeah. I mean, I think, as you say, it's not just one thing. It's like a whole bunch of things up and down the stack. And, you know, all of those really combine to help make us able to make highly capable large models, as well as, you know, software techniques to get those large model capabilities into much smaller, lighter weight models that are, you know, much more cost effective and lower latency, but still, you know, quite capable for their size. Yeah.Alessio Fanelli [00:01:31]: How much pressure do you have on, like, having the lower bound of the Pareto Frontier, too? I think, like, the new labs are always trying to push the top performance frontier because they need to raise more money and all of that. And you guys have billions of users. And I think initially when you worked on the TPU, you were thinking about, you know, if everybody that used Google used the voice model for, like, three minutes a day, you would need to double your CPU number. Like, what's that discussion today at Google? Like, how do you prioritize frontier versus, like, we have to do this? How do we actually need to deploy it if we build it?Jeff Dean [00:02:03]: Yeah, I mean, I think we always want to have models that are at the frontier or pushing the frontier because I think that's where you see what capabilities now exist that didn't exist at the sort of slightly less capable last year's version or last six months ago version. At the same time, you know, we know those are going to be really useful for a bunch of use cases, but they're going to be a bit slower and a bit more expensive than people might like for a bunch of other broader models. So I think what we want to do is always have kind of a highly capable sort of affordable model that enables a whole bunch of, you know, lower latency use cases. People can use them for agentic coding much more readily and then have the high-end, you know, frontier model that is really useful for, you know, deep reasoning, you know, solving really complicated math problems, those kinds of things. And it's not that one or the other is useful. They're both useful. So I think we'd like to do both. 
And also, you know, through distillation, which is a key technique for making the smaller models more capable, you know, you have to have the frontier model in order to then distill it into your smaller model. So it's not like an either or choice. You sort of need that in order to actually get a highly capable, more modest size model. Yeah.Alessio Fanelli [00:03:24]: I mean, you and Geoffrey came up with distillation in 2014.Jeff Dean [00:03:28]: Don't forget, Oriol Vinyals as well. Yeah, yeah.Alessio Fanelli [00:03:30]: A long time ago. But like, I'm curious how you think about the cycle of these ideas, even like, you know, sparse models and, you know, how do you reevaluate them? How do you think about, in the next generation of models, what is worth revisiting? Like, yeah, you worked on so many ideas that end up being influential, but like in the moment, they might not feel that way necessarily. Yeah.Jeff Dean [00:03:52]: I mean, I think distillation was originally motivated because we were seeing that we had a very large image data set at the time, you know, 300 million images that we could train on. And we were seeing that if you create specialists for different subsets of those image categories, you know, this one's going to be really good at sort of mammals, and this one's going to be really good at sort of indoor room scenes or whatever, and you can cluster those categories and train on an enriched stream of data after you do pre-training on a much broader set of images, you get much better performance. If you then treat that whole set of maybe 50 models you've trained as a large ensemble, that's not a very practical thing to serve, right? So distillation really came about from the idea of, okay, what if we want to actually serve that and train all these independent sort of expert models and then squish it into something that actually fits in a form factor that you can actually serve? And that's, you know, not that different from what we're doing today. You know, often today, instead of having an ensemble of 50 models, we're having a much larger scale model that we then distill into a much smaller scale model.Shawn Wang [00:05:09]: Yeah. A part of me also wonders if distillation also has a story with the RL revolution. So let me maybe try to articulate what I mean by that, which is you can, RL basically spikes models in a certain part of the distribution. And then you have to sort of, well, you can spike models, but usually sometimes... It might be lossy in other areas and it's kind of like an uneven technique, but you can probably distill it back and you can, I think that the sort of general dream is to be able to advance capabilities without regressing on anything else. And I think like that, that whole capability merging without loss, I feel like it's like, you know, some part of that should be a distillation process, but I can't quite articulate it. I haven't seen many papers about it.Jeff Dean [00:06:01]: Yeah, I mean, I tend to think of one of the key advantages of distillation is that you can have a much smaller model and you can have a very large, you know, training data set and you can get utility out of making many passes over that data set because you're now getting the logits from the much larger model in order to sort of coax the right behavior out of the smaller model that you wouldn't otherwise get with just the hard labels. And so, you know, I think what we've observed is you can get, you know, very close to your largest model performance with distillation approaches.
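For readers who want the mechanics behind that observation: training the student against the teacher's softened logits rather than one-hot labels is the classic distillation loss from the Hinton, Vinyals & Dean paper. A minimal sketch in PyTorch — the hyperparameters and names here are illustrative assumptions, not anything Gemini-specific:

```python
# Minimal sketch of soft-label distillation (Hinton, Vinyals & Dean,
# "Distilling the Knowledge in a Neural Network"). Illustrative only.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, hard_labels,
                      temperature=2.0, alpha=0.5):
    # Soften both distributions: the teacher's full logit distribution
    # carries information about how classes relate, which one-hot labels lose.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable as T changes.
    soft_loss = F.kl_div(log_soft_student, soft_teacher,
                         reduction="batchmean") * temperature ** 2
    # Keep a standard cross-entropy term on the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, hard_labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss

# The teacher only does forward passes; gradients update the student alone:
#   with torch.no_grad():
#       teacher_logits = teacher(batch)
#   loss = distillation_loss(student(batch), teacher_logits, labels)
```

The same shape of loss works whether the "teacher" is an ensemble of 50 specialists, as in the original image setup Dean describes, or one frontier-scale model being squeezed into a Flash-sized one.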
And that seems to be, you know, a nice sweet spot for a lot of people, because for multiple Gemini generations now, we've been able to make the sort of Flash version of the next generation as good or even substantially better than the previous generation's Pro. And I think we're going to keep trying to do that because that seems like a good trend to follow.Shawn Wang [00:07:02]: So, Dara asked: the original map was Flash, Pro, and Ultra. Are you just sitting on Ultra and distilling from that? Is that like the mother lode?Jeff Dean [00:07:12]: I mean, we have a lot of different kinds of models. Some are internal ones that are not necessarily meant to be released or served. Some are, you know, our Pro-scale model, and we can distill from that as well into our Flash-scale model. So I think, you know, it's an important set of capabilities to have. And inference-time scaling can also be a useful thing to improve the capabilities of the model.Shawn Wang [00:07:35]: And yeah, yeah, cool. Yeah. And obviously, I think the economy of Flash is what led to the total dominance. I think the latest number is like 50 trillion tokens. I don't know. I mean, obviously, it's changing every day.Jeff Dean [00:07:46]: Yeah, yeah. But, you know, by market share, hopefully up.Shawn Wang [00:07:50]: No, I mean, it's just that, economics-wise, because Flash is so economical, you can use it for everything. Like it's in Gmail now. It's in YouTube. It's in everything.Jeff Dean [00:08:02]: We're using it more in our search products, in the various AI Mode overviews.Shawn Wang [00:08:05]: Oh, my God. Flash powers the AI Mode. Oh, my God. Yeah, I didn't even think about that.Jeff Dean [00:08:10]: I mean, I think one of the things that is quite nice about the Flash model is not only is it more affordable, it's also lower latency. And I think latency is actually a pretty important characteristic for these models, because we're going to want models to do much more complicated things that are going to involve, you know, generating many more tokens from when you ask the model to do something until it actually finishes what you asked it to do. Because you're going to ask now, not just write me a for loop, but like write me a whole software package to do X or Y or Z. And so having low latency systems that can do that seems really important. And Flash is one direction, one way of doing that. You know, obviously our hardware platforms enable a bunch of interesting aspects of our, you know, serving stack as well, like TPUs. The interconnect between chips on the TPUs is actually quite high performance and quite amenable to, for example, long context kind of attention operations, you know, having sparse models with lots of experts. These kinds of things really matter a lot in terms of how do you make them servable at scale.Alessio Fanelli [00:09:19]: Yeah. Does it feel like there's some breaking point for the Pro-to-Flash distillation, kind of like one generation delayed? I almost think of it as, in certain tasks, the Pro model today has saturated some sort of task.
And I think for most of the things people use models for, at some point the Flash model two generations out will be able to do basically everything. How do you make it economical to keep pushing the Pro frontier when a lot of the population will be okay with the Flash model? I'm curious how you think about that.
Jeff Dean [00:09:59]: I mean, I think that's true if the distribution of what people are asking the models to do is stationary, right? But what often happens is that as the models become more capable, people ask them to do more. I think this happens in my own usage. A year ago I would try our models on some coding task, and they were okay at simpler things but wouldn't work very well for more complicated things. Since then we've improved dramatically on the more complicated coding tasks, and now I'll ask for much more complicated things. And that's true not just of coding. Now it's "can you analyze all the renewable energy deployments in the world and give me a report on solar panel deployment," or whatever. That's a much more complicated task than people would have asked a year ago. So you are going to want more capable models to push the frontier of what people can ask models to do. And that also gives us insight into where things break down, and how we can improve the model in those particular areas to make the next generation even better.
Alessio Fanelli [00:11:11]: Yeah. Are there any benchmarks or test sets you use internally? It's almost like the same benchmarks get reported every time, and it's like, all right, it's 99 instead of 97. How do you keep pushing the team internally, like "this is what we're building towards"?
Jeff Dean [00:11:26]: I mean, benchmarks, particularly external, publicly available ones, have their utility, but they often have a lifespan of utility. They're introduced, and maybe they're quite hard for current models. I like to think the best kinds of benchmarks are ones where the initial scores are something like 10 to 30%, but not higher. Then you can work on improving that capability, whatever the benchmark is trying to assess, and get it up to 80 or 90%. Once it hits 95% or so, you get very diminishing returns from really focusing on that benchmark, because either you've now achieved that capability, or there's the issue of leakage, the public data or very related data ending up in your training data. So we have a bunch of held-out internal benchmarks that we really look at, where we know the data wasn't represented in the training data at all. There are capabilities we want the model to have that it doesn't have now, and then we can work on assessing: how do we make the model better at these kinds of things?
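(That lifecycle can be read as a mechanical triage rule. A toy sketch, with invented benchmark names and scores:)

# Toy triage per the lifecycle described above: adopt benchmarks with real
# headroom, retire ones above ~95% (achieved, or suspect training-data leakage).
scores = {"held_out_reasoning": 0.18, "public_math_suite": 0.97, "long_context_qa": 0.55}

def triage(score):
    if score >= 0.95:
        return "retire: saturated or suspect leakage"
    if score <= 0.30:
        return "adopt: hard enough to drive research"
    return "keep tracking"

for name, s in scores.items():
    print(f"{name}: {s:.0%} -> {triage(s)}")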
Is it that we need different kinds of data to train on, more specialized for this particular task? Or do we need architectural improvements, or some other model-capability improvements? What would help make that better?
Shawn Wang [00:12:53]: Is there such an example, a benchmark that inspired an architectural improvement? I'm jumping on that because you just mentioned it.
Jeff Dean [00:13:02]: I mean, I think some of the long-context capability of the Gemini models, which came first in 1.5, I guess, really was about looking at: okay, we want to have...
Shawn Wang [00:13:15]: And immediately everyone jumped to completely green charts. I was like, how did everyone crack this at the same time? Right.
Jeff Dean [00:13:23]: I mean, as you say, the single-needle-in-a-haystack benchmark is really saturated, at least for context lengths up to 128K or something, and most models don't actually have context lengths much larger than 128K these days. We're trying to push the frontier to 1 million or 2 million of context, which is good, because I think there are a lot of use cases where putting a thousand pages of text, or multiple hour-long videos, in the context and actually being able to make use of that is useful. The opportunities there are fairly large. But the single-needle-in-a-haystack benchmark is saturated. So you really want more complicated, multi-needle, or more realistic "take all this content and produce this kind of answer from a long context" benchmarks that better assess what people really want to do with long context. Which is not just: can you tell me the product number for this particular thing?
Shawn Wang [00:14:31]: Yeah, it's retrieval. It's retrieval within machine learning. It's interesting, because the more meta level I'm trying to operate at here is: you have a benchmark, and you see the architectural thing you need to do to go fix it. But should you do it? Because sometimes that's an inductive bias, basically. It's exactly the kind of thing Jason Wei, who used to work at Google, would say: you're going to win short term, but longer term, I don't know if that's going to scale. You might have to undo it.
Jeff Dean [00:15:01]: I mean, I like to focus not on exactly what solution we're going to derive, but on what capability you would want. And I think we're very convinced that long context is useful, but it's way too short today. What you would really want is: can I attend to the whole internet while I answer my question? But that's not going to happen, and I don't think it's going to be solved by purely scaling the existing attention solutions, which are quadratic. A million tokens kind of pushes what you can do. You're not going to do that with a billion tokens, let alone a trillion. But if you could give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses for that. You could attend to the internet. You could attend to the pixels of YouTube and the deeper representations we can find, not just for a single video, but across many videos. And on a personal Gemini level, you could attend to all of your personal state, with your permission: your emails, your photos, your docs, the plane tickets you have.
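(The "quadratic" point is easy to make concrete: full self-attention over n tokens computes on the order of n squared query-key scores, so each 1,000x jump in context costs roughly a million-fold more attention work. The constants below are illustrative, not Gemini's:)

# Relative cost of full self-attention at different context lengths.
# Each score is one query-key dot product; real models multiply this by
# layers, heads, and head dimension, so read these as relative costs only.
for n in [128_000, 1_000_000, 1_000_000_000, 1_000_000_000_000]:
    print(f"context {n:>17,d} tokens -> ~{float(n)**2:.1e} attention scores")
# ~1e12 scores at 1M tokens is already expensive; ~1e24 at 1T is hopeless,
# which is why trillion-token "attention" has to be an illusion built from
# retrieval and cheap first-pass filtering rather than raw attention.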
I think that would be really, really useful. And the question is: how do you get algorithmic improvements and system-level improvements that get you to something where you actually can attend to trillions of tokens in a meaningful way?
Shawn Wang [00:16:26]: By the way, I did some math: if you spoke all day, every day, for eight hours a day, you'd only generate a maximum of about a hundred K tokens, which very comfortably fits.
Jeff Dean [00:16:38]: Right. But if you then say, okay, I want to be able to understand everything people are putting on videos...
Shawn Wang [00:16:46]: Well, also, the classic example is you start going beyond language into proteins and whatever else, which is extremely information dense.
Jeff Dean [00:16:55]: I mean, one of the things about Gemini's multimodal aspects is we've always wanted it to be multimodal from the start. To some people that means text and images and video and audio, the human-like modalities. But I think it's also really useful to have Gemini know about non-human modalities: LIDAR sensor data from, say, Waymo vehicles or robots, or various kinds of health modalities, x-rays and MRIs and imaging and genomics information. There are probably hundreds of modalities of data where you'd like the model to at least be exposed to the fact that this is an interesting modality that has certain meaning in the world. Even if you haven't trained on all the LIDAR data or MRI data you could have, because maybe that doesn't make sense in the trade-offs of what you include in your main pre-training data mix, at least including a little bit of it is actually quite useful, because it at least hints to the model that this is a thing.
Shawn Wang [00:18:04]: Yeah. Do you believe... since we're on this topic, I just get to ask you all the questions I always wanted to ask, which is fantastic. Are there some king modalities, modalities that supersede all the other modalities? A simple example: vision can, on a pixel level, encode text, and DeepSeek had the DeepSeek-OCR paper that did that. And vision has also been shown to maybe incorporate audio, because you can do audio spectrograms, and that's also a vision-capable thing. So maybe vision is just the king modality?
Jeff Dean [00:18:36]: I mean, vision and motion are quite important things, right? Motion meaning video, as opposed to static images, because there's a reason evolution has evolved eyes something like 23 independent ways: it's such a useful capability for sensing the world around you. And that's really what we want these models to do: interpret the things we're seeing or paying attention to, and then help us use that information to do things.
Shawn Wang [00:19:05]: I think motion, you know... I still want to shout out: I think Gemini is still the only native video understanding model out there. So I use it for YouTube all the time.
Jeff Dean [00:19:15]: Yeah. I mean, I think people are not necessarily aware of what the Gemini models can actually do. Like, I have an example I've used in one of my talks.
It was a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has Michael Jordan hitting some jump shot at the end of the finals, some soccer goals, things like that. And you can literally just give it the video and say: can you please make me a table of what all these different events are, what the date is when they happened, and a short description? And you now get an 18-row table of that information extracted from the video, which is not something most people think of: turning video into a SQL-like table.
Alessio Fanelli [00:20:11]: Has there been any discussion inside Google of... you mentioned attending to the whole internet, right? Google is almost built because a human cannot attend to the whole internet, and you need some sort of ranking to find what you need. That ranking is much different for an LLM, because you can expect a person to look at maybe the first five or six links in a Google search, versus for an LLM, should you expect 20 links that are highly relevant? How do you internally figure out how to build the AI mode that is maybe a much broader search in span, versus the more human one?
Jeff Dean [00:20:47]: I mean, even in pre-language-model-based work, our ranking systems would be built to start with a giant number of web pages in our index, many of them not relevant. So you identify a subset of them that are relevant with very lightweight methods, and you're down to something like 30,000 documents. Then you gradually refine that, applying more and more sophisticated algorithms and more and more sophisticated signals of various kinds, to get down to what you ultimately show, which is the final 10 results, or 10 results plus other kinds of information. And I think an LLM-based system is not going to be that dissimilar. You're going to attend to trillions of tokens, but you're going to want to identify the 30,000-ish documents, containing maybe 30 million interesting tokens, and then go from that to the 117 documents you really should be paying attention to in order to carry out the task the user has asked. You can imagine systems where you have a lot of highly parallel processing to identify those initial 30,000 candidates, maybe with very lightweight models. Then you have some system that helps you narrow down from 30,000 to the 117, with a somewhat more sophisticated model or set of models. And then maybe the final model, the one that looks at the 117 things, is your most capable model. So I think it's going to be some system like that, one that really enables you to give the illusion of attending to trillions of tokens. Sort of the way Google search gives you, well, not the illusion, you really are searching the internet, but you're finding a very small subset of things that are relevant.
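(Schematically, that funnel is classic cascaded ranking. The stage sizes below come from Jeff's example; the scoring functions are stubs that ignore the query, which a real system obviously would not:)

import random

def cheap_score(doc, query):      # stage 1: lightweight, massively parallel
    return random.random()

def medium_score(doc, query):     # stage 2: a somewhat more sophisticated model
    return random.random()

def best_model_score(doc, query): # stage 3: most capable model, tiny candidate set
    return random.random()

def funnel(corpus, query):
    stage1 = sorted(corpus, key=lambda d: cheap_score(d, query), reverse=True)[:30_000]
    stage2 = sorted(stage1, key=lambda d: medium_score(d, query), reverse=True)[:117]
    return sorted(stage2, key=lambda d: best_model_score(d, query), reverse=True)

corpus = [f"doc-{i}" for i in range(1_000_000)]  # stand-in for a web-scale index
print(funnel(corpus, "solar panel deployment")[:3])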
Shawn Wang [00:22:47]: Yeah. I often tell people who are not steeped in Google search history that BERT was basically put inside Google search almost immediately, and it improved results a lot, right? I don't have any numbers off the top of my head, but I'm sure those are obviously the most important numbers to Google.
Jeff Dean [00:23:08]: I mean, going to an LLM-based representation of text and words enables you to get out of the explicit, hard notion of particular words having to be on the page, and really get at the notion that the topic of this page, or this paragraph, is highly relevant to this query.
Shawn Wang [00:23:28]: I don't think people understand how much LLMs have taken over all these very high traffic systems. It's Google search, it's YouTube. YouTube has this semantic ID thing where every item in the vocab is a YouTube video, predicting the video using a codebook, which is absurd to me at YouTube's size.
Jeff Dean [00:23:50]: And most recently Grok as well, for xAI. I mean, I'll call out that even before LLMs were used extensively in search, we put a lot of emphasis on softening the notion of what the user actually entered into the query.
Shawn Wang [00:24:06]: So do you have a history of what the progression was?
Jeff Dean [00:24:09]: I actually gave a talk at, I guess, the Web Search and Data Mining conference in 2009. We never actually published any papers about the origins of Google search, but we went through four or five or six generations of redesigning the search and retrieval system from about 1999 through 2004 or 2005, and that talk is really about that evolution. One of the things that happened in 2001 was we were working to scale the system in multiple dimensions. One: we wanted to make our index bigger, so we could retrieve from a larger index, which generally helps your quality, because if you don't have the page in your index, you're not going to do well. And then we also needed to scale our capacity, because our traffic was growing quite extensively. So we had a sharded system where you have more and more shards as the index grows: you have, say, 30 shards, and if you want to double the index size, you make it 60 shards, so you can bound the latency with which you respond to any particular user query. And then as traffic grows, you add more and more replicas of each of those shards. We eventually did the math and realized that in a data center where we had, say, 60 shards and 20 copies of each shard, we now had 1,200 machines with disks, and one copy of that index would actually fit in memory across those 1,200 machines. So in 2001 we put our entire index in memory, and what that enabled from a quality perspective was amazing. Before, you had to be really careful about how many different terms you looked at for a query, because every one of them would involve a disk seek on every one of the 60 shards, and as you make your index bigger, that becomes even more inefficient. But once you have the whole index in memory, it's totally fine to have 50 terms you throw into the query from the user's original three- or four-word query, because now you can add synonyms: restaurant and restaurants and cafe and bistro and all these things.
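(The arithmetic behind that move is worth spelling out. Only the 60-shards-by-20-replicas shape comes from the story; the byte counts are invented for illustration:)

# Back-of-envelope: when does a replicated, sharded disk index fit in RAM?
shards, replicas = 60, 20
machines = shards * replicas              # 1,200 machines already deployed
ram_per_machine_gb = 2                    # hypothetical 2001-era server
index_size_gb = 1_500                     # hypothetical full-index size

aggregate_ram_gb = machines * ram_per_machine_gb
print(f"{machines} machines x {ram_per_machine_gb} GB = {aggregate_ram_gb} GB of RAM")
print(f"full in-memory copies of a {index_size_gb} GB index: {aggregate_ram_gb // index_size_gb}")
# Once at least one copy fits, per-term disk seeks vanish, and expanding a
# 3-4 word query into ~50 terms (restaurant, cafe, bistro...) becomes cheap.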
And you can suddenly start really getting at the meaning of the words, as opposed to the exact surface form the user typed in. That was 2001, very much pre-LLM, but it was really about softening the strict definition of what the user typed in order to get at the meaning.
Alessio Fanelli [00:26:47]: What principles do you use to design these systems, especially when, in 2001, the internet is doubling or tripling in size every year? I think today you see that with LLMs too, where every year the jumps in size and capabilities are just so big. Are there any principles you use to think about this?
Jeff Dean [00:27:08]: I mean, first, whenever you're designing a system, you want to understand which design parameters are going to be most important. How many queries per second do you need to handle? How big is the internet, and how big is the index you need to handle? How much data do you need to keep for every document in the index, and how are you going to look at it when you retrieve things? What happens if traffic were to double or triple: will the system work well? And I think a good design principle is to design a system so that the most important characteristics can scale by factors of five or ten, but probably not beyond that. Because often what happens is that if you design a system for X, and something suddenly becomes a hundred X, that enables a very different point in the design space, one that would not make sense at X but all of a sudden makes total sense at a hundred X. Like going from a disk-based index to an in-memory index: that makes a lot of sense once you have enough traffic, because now you have enough replicas of the on-disk state that those machines actually can hold a full copy of the index in memory, and that all of a sudden enables a completely different design that wouldn't have been practical before. So I'm a big fan of thinking through designs in your head, playing with the design space a little, before you actually do a lot of writing of code. But, as you said, in the early days of Google we were growing the index quite extensively, and we were growing the update rate of the index. The update rate is actually the parameter that changed the most, surprisingly. It used to be once a month.
Shawn Wang [00:28:55]: Yeah.
Jeff Dean [00:28:56]: And then we went to a system that could update any particular page in under one minute.
Shawn Wang [00:29:02]: Yeah. Because this is a competitive advantage, right?
Jeff Dean [00:29:04]: Because all of a sudden, for news-related queries, if you've got last month's news index, it's not actually that useful.
Shawn Wang [00:29:11]: News is a special beast. You could have split it onto a separate system.
Jeff Dean [00:29:15]: Well, we did. We launched a Google News product, but you also want news-related queries that people type into the main index to be updated.
Shawn Wang [00:29:23]: So, yeah, it's interesting. And then you have to classify the pages; you have to decide which pages should be updated and at what frequency.
Oh yeah.
Jeff Dean [00:29:30]: There's a whole system behind the scenes that's trying to decide update rates and importance of the pages. So even if the update rate of a page seems low, you might still want to recrawl important pages quite often, because the likelihood they changed might be low, but the value of having them updated is high.
Shawn Wang [00:29:50]: Yeah. Well, this mention of latency and saving things to disk reminds me of one of your classics, which I have to bring up: Latency Numbers Every Programmer Should Know. Was there a general story behind that? Did you just write it down?
Jeff Dean [00:30:06]: I mean, this has eight or ten different kinds of metrics, like: how long does a cache miss take? How long does a branch mispredict take? How long does a reference to main memory take? How long does it take to send a packet from the US to the Netherlands?
Shawn Wang [00:30:21]: Why the Netherlands, by the way? Is that because of Chrome?
Jeff Dean [00:30:25]: We had a data center in the Netherlands. I think this gets to the point of being able to do back-of-the-envelope calculations. These are the raw ingredients of those, and you can use them to say: okay, if I need to design a system to do image search and thumbnailing of the result page, what would I do? I could pre-compute the image thumbnails, or I could try to thumbnail them on the fly from the larger images. What would that do? How much disk bandwidth do I need? How many disk seeks would I do? You can actually do those thought experiments in 30 seconds or a minute with the basic numbers at your fingertips. And then as you build software using higher-level libraries, you want to develop the same intuitions for how long it takes to look up something in each particular kind of structure.
Shawn Wang [00:31:51]: Which is a simple byte conversion. That's nothing interesting. I wonder, if you were to update your numbers...
Jeff Dean [00:31:58]: I mean, I think it's really good to think about the calculations you're doing in a model, either for training or inference. Often a good way to view that is: how much state will you need to bring in from memory, either on-chip SRAM, or HBM attached to the accelerator, or DRAM, or over the network? And then how expensive is that data motion relative to the cost of an actual multiply in the matrix multiply unit? And that cost is actually really, really low. Depending on your precision, I think it's sub one picojoule.
Shawn Wang [00:32:50]: Oh, okay. You measure it by energy.
Jeff Dean [00:32:52]: Yeah. I mean, it's all going to be about energy and how you make the most energy-efficient system. And then moving data from SRAM on the other side of the chip, not even off-chip, but on the other side of the same chip, can be a thousand picojoules. And so all of a sudden, this is why your accelerators require batching. Because if you move, say, a parameter of a model from SRAM on the chip into the multiplier unit, that's going to cost you a thousand picojoules, so you'd better make use of the thing you moved many, many times.
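(Jeff's picojoule argument can be written down directly. The ~1 pJ multiply and ~1,000 pJ on-chip move are the orders of magnitude he quotes; the rest is arithmetic:)

MULTIPLY_PJ = 1.0         # order of magnitude: about a picojoule per multiply
MOVE_WEIGHT_PJ = 1000.0   # moving one weight across the chip from SRAM

def energy_per_multiply(batch_size):
    # A weight is moved once, then reused for every example in the batch.
    return (MOVE_WEIGHT_PJ + batch_size * MULTIPLY_PJ) / batch_size

for b in [1, 8, 64, 256]:
    print(f"batch {b:>3}: {energy_per_multiply(b):7.1f} pJ per useful multiply")
# batch 1: ~1001 pJ, you paid 1000 pJ of movement for one 1 pJ multiply;
# batch 256: ~4.9 pJ, movement amortized away, at the cost of latency.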
So that's where the batch dimension comes in. Because all of a sudden, if you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's really not good.
Shawn Wang [00:33:40]: Yeah. Right.
Jeff Dean [00:33:41]: Because then you paid a thousand picojoules in order to do your one-picojoule multiply.
Shawn Wang [00:33:46]: I have never heard an energy-based analysis of batching.
Jeff Dean [00:33:50]: Yeah. I mean, that's why people batch. Ideally, you'd like to use batch size one, because the latency would be great.
Shawn Wang [00:33:56]: The best latency.
Jeff Dean [00:33:56]: But the energy cost and the compute inefficiency you get is quite large.
Shawn Wang [00:34:04]: Is there a similar trick, like what you did with putting everything in memory? Obviously Groq has caused a lot of waves by betting very hard on SRAM. I wonder if that's something you already saw with the TPUs, since to serve at your scale you probably saw that coming. What hardware innovations or insights were formed because of what you were seeing there?
Jeff Dean [00:34:33]: Yeah. I mean, TPUs have this nice regular structure of 2D or 3D meshes with a bunch of chips connected, and each one of those has HBM attached. For serving some kinds of models, you pay a lot higher cost and latency bringing things in from HBM than you do bringing them in from SRAM on the chip. So if you have a small enough model, you can actually do model parallelism, spread it out over lots of chips, and you get quite good throughput and latency improvements from doing that. You're now striping your smallish-scale model over, say, 16 or 64 chips, and if you do that and it all fits in SRAM, that can be a big win. So that's not a surprise, but it is a good technique.
Alessio Fanelli [00:35:27]: Yeah. What about the TPU design itself? How do you decide where the improvements have to go? This is a good example: is there a way to bring the thousand picojoules down to 50, and is it worth designing a new chip to do that? The extreme is when people say you should burn the model onto an ASIC. How much is worth doing in hardware when things change so quickly? What's the internal discussion?
Jeff Dean [00:35:57]: I mean, we have a lot of interaction between the TPU chip design and architecture team and the higher-level modeling experts, because you really want to take advantage of being able to co-design what future TPUs should look like based on where we think the ML research puck is going, in some sense. As a hardware designer for ML in particular, you're trying to design a chip starting today, and that design might take two years before it even lands in a data center. And then it has to have a reasonable lifetime as a chip, taking you another three, four, five years. So you're trying to predict what ML computations people will want to run two to six years out, in a very fast-changing field.
And having people with interesting ML research ideas, things we think will start to work or be more important in that timeframe, really enables us to get interesting hardware features put into TPU N plus two, where TPU N is what we have today.
Shawn Wang [00:37:10]: Oh, the cycle time is plus two.
Jeff Dean [00:37:12]: Roughly. Sometimes you can squeeze some changes into N plus one, but bigger changes require the chip design to be earlier in its lifecycle. So whenever we can do that, it's generally good. And sometimes you can put in speculative features that maybe won't cost you much chip area, but if it works out, it makes something ten times as fast; if it doesn't work out, well, you burned a tiny amount of your chip area on that thing, but it's not that big a deal. Sometimes it's a very big change, and we want to be pretty sure it's going to work out, so we'll do lots of careful ML experimentation to show us this is actually the way we want to go.
Alessio Fanelli [00:37:58]: Is there a reverse of that: we already committed to this chip design, so we cannot take the model architecture that way because it doesn't quite fit?
Jeff Dean [00:38:06]: Yeah. I mean, you definitely have things where you adapt what the model architecture looks like so that it's efficient on the chips you're going to have for both training and inference of that generation of model. So it goes both ways. Sometimes you can take advantage of, say, lower-precision formats coming in a future generation, so you might train at that lower precision even if the current generation doesn't quite do it.
Shawn Wang [00:38:40]: Yeah. How low can we go in precision? People are saying ternary...
Jeff Dean [00:38:43]: I mean, I'm a big fan of very low precision, because that saves you a tremendous amount. It's picojoules per bit that you're transferring, and reducing the number of bits is a really good way to reduce that. I think people have gotten a lot of mileage out of very-low-bit-precision representations, but with scaling factors that apply to a whole bunch of those weights.
Shawn Wang [00:39:15]: Interesting. So low precision, but scaled weights. Huh. Never considered that. And while we're on this topic: the whole concept of precision is a little weird when we're sampling anyway. We build chips that do very exact math, and then we throw a random number generator at the output. So there's a movement towards energy-based models and processors. You've obviously thought about it; I'm curious what your commentary is.
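(The "low precision plus scaling factors" trick is group-wise quantization: a few bits per weight, plus one higher-precision scale shared by each block of weights. A minimal int4-style sketch; the group size and rounding scheme are illustrative choices, not Gemini's:)

import numpy as np

def quantize_blocked(w, group=32, bits=4):
    # Each block of `group` weights shares one float scale; weights become
    # small signed integers (here, -7..7 for 4 bits).
    qmax = 2 ** (bits - 1) - 1
    w = np.asarray(w, dtype=np.float32)
    pad = (-len(w)) % group
    blocks = np.pad(w, (0, pad)).reshape(-1, group)
    scales = np.abs(blocks).max(axis=1, keepdims=True) / qmax
    scales[scales == 0] = 1.0
    q = np.round(blocks / scales).astype(np.int8)
    return q, scales

def dequantize(q, scales, n):
    return (q.astype(np.float32) * scales).reshape(-1)[:n]

w = np.random.randn(1024).astype(np.float32)
q, s = quantize_blocked(w)
print("mean abs reconstruction error:", float(np.abs(dequantize(q, s, len(w)) - w).mean()))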
Jeff Dean [00:39:50]: Yeah. I mean, I think there are a bunch of interesting trends. Energy-based models are one. Diffusion-based models, which don't sequentially decode tokens, are another. And speculative decoding is a way you can get an equivalent, very small...
Shawn Wang [00:40:06]: Draft.
Jeff Dean [00:40:07]: ...batch factor. You predict eight tokens out, which enables you to increase the effective batch size of what you're doing by a factor of eight, and then you maybe accept five or six of those tokens. So you get a 5x improvement in the amortization of moving weights into the multipliers to do the prediction for the tokens. These are all really good techniques, and I think it's really good to look at them through the lens of energy (real energy, not energy-based models), and also latency and throughput. That lens guides you to solutions that are going to be better at serving larger models, or equivalent-size models more cheaply and with lower latency.
Shawn Wang [00:41:03]: Yeah. It's appealing intellectually; I haven't seen it really hit the mainstream. But I do think there's some poetry in the sense that we don't have to do a lot of shenanigans if we fundamentally design it into the hardware.
Jeff Dean [00:41:23]: I mean, there are also the more exotic things, like analog computing substrates as opposed to digital ones. I think those are super interesting, because they can potentially be low power, but you often end up wanting to interface them with digital systems, and you lose a lot of the power advantage in the digital-to-analog and analog-to-digital conversions you end up doing at the boundaries and periphery of the system. I still think there's a tremendous distance we can go from where we are today in terms of energy efficiency, with much better and specialized hardware for the models we care about.
Shawn Wang [00:42:05]: Yeah.
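(The speculative-decoding arithmetic from a moment ago, as a toy loop. The draft and verify models are stubs; the acceptance pattern is hard-coded to average five to six of eight, matching Jeff's numbers:)

import random

def draft_eight_tokens():
    return [f"tok{i}" for i in range(8)]       # stand-in for a cheap draft model

def verify(tokens):
    # Stand-in for the big model checking all eight drafts in ONE forward pass;
    # it accepts the longest prefix it agrees with (here ~5-6 on average).
    return tokens[:random.choice([4, 5, 5, 6, 6, 7])]

passes, tokens_out = 1000, 0
for _ in range(passes):
    tokens_out += len(verify(draft_eight_tokens()))
print(f"average tokens per big-model pass: {tokens_out / passes:.2f}")
# Each pass moves the big model's weights once but emits ~5-6 tokens, so the
# 1000 pJ weight movement is amortized roughly 5x, like an effective batch of 5.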
Alessio Fanelli [00:42:06]: Any other interesting research ideas you've seen, or maybe things you cannot pursue at Google that you'd be interested in seeing researchers take a stab at? I guess you have a lot of researchers already.
Jeff Dean [00:42:21]: Our research portfolio is pretty broad. In terms of research directions, there are a whole bunch of open problems in how you make these models reliable and able to do much longer, more complex tasks that have lots of subtasks. How do you orchestrate maybe one model that's using other models as tools, in order to build things that can collectively accomplish much more significant pieces of work than you would ask a single model to do? That's super interesting. And how do you get RL to work for non-verifiable domains? I think that's a pretty interesting open problem, because it would broaden out the capabilities of the models. If we could apply the improvements you're seeing in math and coding to other, less verifiable domains, because we've come up with RL techniques that enable us to do that effectively, that would really make the models improve quite a lot, I think.
Alessio Fanelli [00:43:26]: I'm curious. When we had Noam Brown on the podcast, he said they already proved you can do it with Deep Research. And you kind of have it with AI Mode, in a way; it's not verifiable. I'm curious if there's any thread you think is interesting there. Both are information retrieval of JSON, so I wonder if the retrieval is the verifiable part that you can score. How would you model that problem?
Jeff Dean [00:43:55]: Yeah. I mean, I think there are ways of having other models evaluate the results of what a first model did. Can you have another model that says: are these things you retrieved relevant? Or can you rate these 2,000 things you retrieved, to assess which 50 are the most relevant? Those kinds of techniques are actually quite effective. Sometimes it can even be the same model, just prompted differently to be a critic, as opposed to an actual retrieval system.
Shawn Wang [00:44:28]: I do think there's that weird cliff where it feels like we've done the easy stuff, and now the next part is super hard and nobody's figured it out. But it always feels like that every year. And exactly with this RLVR thing, everyone's talking about: okay, how do we do the next stage, the non-verifiable stuff? And everyone's like: I don't know, LLM judge?
Jeff Dean [00:44:56]: I mean, I feel like the nice thing about this field is there are lots and lots of smart people thinking about creative solutions to the problems we all see. Everyone sees that the models are great at some things, fall down around the edges of those things, and are not as capable as we'd like in those areas. Coming up with good techniques, trying them, and seeing which ones actually make a difference is what the whole research aspect of this field is pushing forward, and I think that's why it's super interesting. If you think about two years ago, we were struggling with GSM-8K problems, right? Fred has two rabbits, he gets three more rabbits, how many rabbits does he have? That's a pretty far cry from the kinds of mathematics the models can do now: IMO and Erdos problems in pure language. That is a really amazing jump in capabilities in a year and a half or so. For other areas, it'd be great if we could make that kind of leap. We don't exactly see how to do it for some areas, but we do see it for others, and we're going to work hard on making that better.
Alessio Fanelli [00:46:14]: Like YouTube thumbnail generation. That would be very helpful. We need that.
Shawn Wang [00:46:20]: That would be AGI, as far as content creators go.
Jeff Dean [00:46:22]: I guess I'm not a YouTube creator, so I don't care that much about that problem, but I guess many people do.
Shawn Wang [00:46:27]: It does matter. People do judge books by their covers, as it turns out.
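(Jeff's "same model, prompted differently" critic is the pattern usually called LLM-as-judge. A schematic sketch with a stubbed model call; no real client API is assumed:)

def call_model(prompt: str) -> str:
    # Placeholder for any LLM call; swap in a real client here.
    return "4" if "Rate 1-5" in prompt else "Solar deployments grew strongly..."

def answer(question: str) -> str:
    return call_model(f"Answer thoroughly:\n{question}")

def critique(question: str, candidate: str) -> int:
    # Same underlying model, different prompt: it now grades instead of answers.
    verdict = call_model(
        "Rate 1-5 how well the answer addresses the question. Reply with one digit.\n"
        f"Question: {question}\nAnswer: {candidate}"
    )
    return int(verdict.strip()[0])

q = "Summarize recent solar deployment trends."
a = answer(q)
print("reward signal for RL:", critique(q, a))   # usable where no hard verifier exists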
Shawn Wang: Just to draw a bit on the IMO gold: I'm still not over the fact that a year ago we had AlphaProof and AlphaGeometry and all those things, and then this year we were like, screw that, we'll just chuck it into Gemini. What's your reflection? The question of merging symbolic systems and LLMs was very much a core belief, and then somewhere along the line people just said: nope, we'll do it all in the LLM.
Jeff Dean [00:47:02]: Yeah. I mean, it makes a lot of sense to me, because humans manipulate symbols, but we probably don't have a symbolic representation in our heads, right? We have some distributed, neural-net-like representation, lots of different neurons and activation patterns firing when we see certain things. And that enables us to reason and plan, do chains of thought, and roll them back: "that approach for solving the problem doesn't seem like it's going to work, I'm going to try this one." In a lot of ways, we're emulating what we intuitively think is happening inside real brains with neural-net-based models. So it never made sense to me to have completely separate, discrete symbolic things and then a completely different way of thinking about them.
Shawn Wang [00:47:59]: Interesting. It maybe seems obvious to you, but it wasn't obvious to me a year ago.
Jeff Dean [00:48:06]: I mean, I do think the progression is telling: the IMO effort that translated to Lean and used Lean plus a specialized geometry model one year, and then the next year switching to a single unified model, roughly the production model with a bit more inference budget. That is actually quite good, because it shows that the capabilities of the general model have improved dramatically, and now you don't need the specialized model. This is very similar to the 2013-to-2016 era of machine learning, right? It used to be that people would train separate models for each different problem. I want to recognize street signs, so I train a street sign recognition model; I want to decode speech, so I have a speech recognition model. Now the era of unified models that do everything is really upon us, and the question is how well those models generalize to new things they've never been asked to do. They're getting better and better.
Shawn Wang [00:49:10]: And you don't need domain experts. I interviewed ETA, who was on that team, and he was like: yeah, I don't know how they work. I don't know where the IMO competition was held, I don't know its rules. I just trained the models. And it's kind of interesting that people with this universal machine learning skill set, you just give them data and enough compute, and they can tackle any task. Which is the bitter lesson, I guess.
Jeff Dean [00:49:39]: I mean, I think general models will win out over specialized ones in most cases.
Shawn Wang [00:49:45]: So I want to push there a bit. I think there's one hole here.
There's this concept of the capacity of a model: abstractly, a model can only contain the number of bits that it has. And, God knows, Gemini Pro is like one to ten trillion parameters; we don't know. But the Gemma models, for example: a lot of people want the open-source local models, and those have some knowledge that isn't necessary, right? They can't know everything. You have the luxury of the big model, and the big model should be capable of everything. But when you're distilling down to the small models, you're memorizing things that are not useful. So how do we extract that? Can we divorce knowledge from reasoning?
Jeff Dean [00:50:38]: Yeah. I mean, I think you do want the model to be most effective at reasoning if it can retrieve things, right? Having the model devote precious parameter space to remembering obscure facts that could be looked up is actually not the best use of that parameter space; you might prefer something that is more generally useful in more settings than some obscure fact. So that's always a tension. At the same time, you also don't want your model to be completely detached from knowledge about the world. It's probably useful to know how long the Golden Gate Bridge is, just as a general sense of how long bridges are, right? It maybe doesn't need to know how long some tiny bridge in a more obscure part of the world is, but it does help to have a fair bit of world knowledge, and the bigger your model is, the more you can have. But I do think combining retrieval with reasoning, and making the model really good at doing multiple stages of retrieval and reasoning through the intermediate retrieval results, is going to be a pretty effective way of making the model seem much more capable.
Shawn Wang [00:51:49]: Because if you think about, say, a personal Gemini, right?
Jeff Dean [00:52:01]: Right. We're not going to train Gemini on my email, probably. We'd rather have a single model that can use retrieval from my email as a tool, have the model reason about it, retrieve from my photos or whatever, make use of that, and have multiple stages of interaction.
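(That multi-stage retrieve-and-reason loop, schematically. The search tool and the model policy are stubs, not Gemini's actual tool API:)

def search_personal_data(query: str, store: dict) -> list:
    # Stand-in for a permissioned tool over email / photos / docs.
    return [v for k, v in store.items() if query.lower() in k.lower()]

def call_model(prompt: str) -> str:
    # Toy two-step policy: request a search first, answer once results arrive.
    return "DONE: Tue 9:40am, gate B12" if "Retrieved:" in prompt else "SEARCH: flight confirmation"

store = {"flight confirmation email": "Tue 9:40am, gate B12"}
context = "Task: when is my flight tomorrow?"
for _ in range(3):                              # bounded number of retrieval stages
    step = call_model(context)
    if step.startswith("DONE:"):
        print(step[len("DONE: "):])
        break
    context += f"\nRetrieved: {search_personal_data(step[len('SEARCH: '):], store)}"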
Alessio Fanelli [00:52:24]: Do you think the vertical models are an interesting pursuit? When people say "we're building the best healthcare LLM, we're building the best law LLM," are those kind of short-term stopgaps?
Jeff Dean [00:52:37]: No, I think vertical models are interesting. You want them to start from a pretty good base model, but then you can view them as enriching the data distribution for that particular vertical domain: healthcare, say, or robotics. We're probably not going to train Gemini on all the possible robotics data we could train it on, because we want it to have a balanced set of capabilities. So we'll expose it to some robotics data, but if you're trying to build a really, really good robotics model, you're going to want to start with that base and then train it on more robotics data. Maybe that would hurt its multilingual translation capability but improve its robotics capabilities. We're always making those kinds of trade-offs in the data mix we train the base Gemini models on. We'd love to include data from 200 more languages, and as much data as we have for those languages, but that's going to displace some other capabilities of the model. It won't be as good at, say, Perl programming. It'll still be good at Python programming, because we'll include enough of that, but other long-tail programming languages or coding capabilities may suffer, or multimodal reasoning capabilities may suffer because we didn't expose it to as much data there, even though it's really good at multilingual things. So I think some combination of specialized models, maybe more modular models, would be nice: the capability to have those 200 languages, plus this awesome robotics module, plus this awesome healthcare module, all of which can be knitted together to work in concert and called upon in different circumstances. If I have a health-related query, it should enable using the health module in conjunction with the main base model to be even better at those kinds of things.
Shawn Wang [00:54:36]: Installable knowledge. Just download it as a package.
Jeff Dean [00:54:39]: Right. And some of that installable stuff can come from retrieval, but some of it probably should come from preloaded training on, say, a hundred billion tokens, or a trillion tokens, of health data.
Shawn Wang [00:54:51]: And for listeners, I'll highlight the Gemma 3n paper, where there was a little bit of that, I think.
Alessio Fanelli [00:54:56]: Yeah. I guess the question is: how many billions of tokens do you need to outpace the frontier model improvements? If I have to make this model better at healthcare while the main Gemini model is still improving, do I need 50 billion tokens? Can I do it with a hundred? And if I need a trillion healthcare tokens, they're probably not out there.
Jeff Dean [00:55:21]: Well, I mean, healthcare is a particularly challenging domain. There's a lot of healthcare data that, appropriately, we don't have access to. But there are a lot of healthcare organizations that want to train models on their own data, data that is not public. So I think there are opportunities there to, say, partner with a large healthcare organization and train models for their use that are more bespoke, and probably better than a general model trained on public data.
Shawn Wang [00:55:58]: Yeah. By the way, this is somewhat related to the language conversation: I think one of your favorite examples is that you can put a low-resource language in the context and the model just learns it.
Jeff Dean [00:56:09]: Oh yeah. I think the example we used was Kalamang, which is truly low-resource, because it's spoken by only about 120 people in the world, and there's essentially no written text.
Shawn Wang [00:56:20]: So you can just do it that way: put the whole data set in the context.
Jeff Dean [00:56:27]: Right. And if you take a language like Somali, or Ethiopian Amharic, there is a fair bit of text in the world. We're probably not putting all the data from those languages into the Gemini base training; we put some of it, but if you put more of it in, you'll improve the capabilities of the model on those languages.
Shawn Wang [00:56:49]: Yeah.
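(That trick is pure in-context learning: hand the model a grammar and word list inside the prompt and ask it to translate. A schematic sketch; the materials and the model call are placeholders. Published Kalamang resources of this kind exist in the MTOB "translate from one grammar book" benchmark:)

def call_model(prompt: str) -> str:
    return "<english translation>"     # placeholder for any long-context model

grammar_book = "<several hundred pages of Kalamang grammar>"   # stand-in text
word_list = "<Kalamang-English word list>"                     # stand-in text

prompt = (
    "You have not been trained on this language. Using ONLY the grammar and\n"
    "word list below, translate the final sentence into English.\n\n"
    f"GRAMMAR:\n{grammar_book}\n\nWORD LIST:\n{word_list}\n\n"
    "SENTENCE: <Kalamang sentence here>"
)
print(call_model(prompt))   # the "learning" happens entirely in the context window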
⭐️⭐️Click here to listen to Dentcast 154 on the official site⭐️⭐️❌❌❌In this episode we take on one of the most controversial topics in prosthetic rehabilitation: changing the Vertical Dimension of Occlusion (VDO) and the common beliefs surrounding it. The episode is based on a recent scientific article and critically examines some entrenched assumptions about increasing the VDO.
Investor Fuel Real Estate Investing Mastermind - Audio Version
In this episode of the Real Estate Pros podcast, host Michelle Kesil speaks with LaVonne Idlette, who runs a vertically integrated real estate investment firm in Florida. LaVonne shares her journey in the real estate industry, discussing the challenges and successes of building a business that focuses on lending, development, and affordable housing. She emphasizes the importance of belief, adaptability, and networking in achieving business growth and overcoming obstacles. The conversation also touches on the impact of the pandemic on business operations and the significance of community development. Professional Real Estate Investors - How we can help you: Investor Fuel Mastermind: Learn more about the Investor Fuel Mastermind, including 100% deal financing, massive discounts from vendors and sponsors you're already using, our world-class community of over 150 members, and SO much more here: http://www.investorfuel.com/apply Investor Machine Marketing Partnership: Are you looking for consistent, high-quality lead generation? Investor Machine is America's #1 lead generation service for professional investors. Investor Machine provides true 'white glove' support to help you build the perfect marketing plan, then we'll execute it for you…talking and working together on an ongoing basis to help you hit YOUR goals! Learn more here: http://www.investormachine.com Coaching with Mike Hambright: Interested in 1 on 1 coaching with Mike Hambright? Mike coaches entrepreneurs looking to level up, build coaching or service-based businesses (Mike runs multiple 7 and 8 figure a year businesses), build a coaching program, and more. Learn more here: https://investorfuel.com/coachingwithmike Attend a Vacation/Mastermind Retreat with Mike Hambright: Interested in joining a "mini-mastermind" with Mike and his private clients on an upcoming "Retreat", at locations like Cabo San Lucas, Napa, a Park City ski trip, Yellowstone, or even Mike's East Texas "Big H Ranch"? Learn more here: http://www.investorfuel.com/retreat Property Insurance: Join the largest and most investor-friendly property insurance provider in 2 minutes. Free to join, and insure all your flips and rentals within minutes! There is NO easier insurance provider on the planet (turn insurance on or off in 1 minute without talking to anyone!), and there's no 15-30% agent mark-up through this platform! Register here: https://myinvestorinsurance.com/ New Real Estate Investors - How we can work together: Investor Fuel Club (Coaching and Deal Partner Community): Looking to kickstart your real estate investing career? Join our one of a kind Coaching Community, Investor Fuel Club, where you'll get trained by some of the best real estate investors in America, and partner with them on deals! You don't need $ for deals…we'll partner with you and hold your hand along the way! Learn More here: http://www.investorfuel.com/club
If you're doing more but feeling stuck, the issue may not be effort—it may be the direction of your growth. In this episode of Healthy Mind, Healthy Life, host Sayan explores why high achievers plateau even as they collect more skills, goals, and credentials. Joined by Ryan Gottfredson, the conversation breaks down vertical development—upgrading your internal “operating system” (nervous system and identity)—so you can lead, decide, and perform with less strain and more impact. This is for professionals and leaders who want real progress without living in constant pressure. About the Guest: Ryan Gottfredson is a leadership coach and researcher focused on vertical development and mindsets. He's the author of Becoming Better: The Groundbreaking Science of Personal Transformation and shares practical tools like meditation and journaling. Key Takeaways: Separate “doing side” growth (skills) from “being side” growth (identity + nervous system). Use the tool-belt/iPad metaphor: add tools vs upgrade the operating system. Notice 4th-gear living: fast pace, high internal RPMs, higher burnout risk. “Shift gears” by letting go of the need to prove, be recognized, or never fail. If you avoid initiative due to fear of failure, more credentials won't fix it—inner work will. Start simple: meditation for regulation and a daily journaling habit for self-awareness. How to Connect With the Guest: https://ryangottfredson.com/ Want to be a guest on Healthy Mind, Healthy Life? DM on PM - Send me a message on PodMatch DM Me Here: https://www.podmatch.com/hostdetailpreview/avik Disclaimer: This video is for educational and informational purposes only. The views expressed are the personal opinions of the guest and do not reflect the views of the host or Healthy Mind By Avik™️. We do not intend to harm, defame, or discredit any person, organization, brand, product, country, or profession mentioned. All third-party media used remain the property of their respective owners and are used under fair use for informational purposes. By watching, you acknowledge and accept this disclaimer. Healthy Mind By Avik™️ is a global platform redefining mental health as a necessity, not a luxury. Born during the pandemic, it's become a sanctuary for healing, growth, and mindful living. Hosted by Avik Chakraborty, storyteller, survivor, and wellness advocate. With over 6000+ episodes and 200K+ global listeners, we unite voices, break stigma, and build a world where every story matters.
TOPIC: Lear Corp PANEL: Ray Scott, Lear Corporation; David Welch, Bloomberg; Gary Vasilash, shinymetalboxes.net; John McElroy, Autoline.tv
We've got a super cool episode lined up this time, with three amazing guests joining us! Luc Besson is the director of The Fifth Element, Lucy, and Léon: The Professional, and we're diving deep into his latest movie, Dracula. We discuss the incredible artistry behind the movie, its captivating story, and the memorable characters they created. After that, we go straight into the interview with Caleb Landry Jones and Zoë Bleu! We discuss the power of love, their experience filming the movie, and the story's deeper meaning. Buckle up for this great conversation and make sure to check out Dracula this Friday! Vertical will release DRACULA in theaters nationwide on February 6th, 2026. Learn more about your ad choices. Visit megaphone.fm/adchoices