⭐️⭐️ To listen to DentCast 155 on the official site, click here ⭐️⭐️ In this episode we pick up where the previous DentCast left off, continuing the discussion of VD and the common beliefs surrounding it, and bringing the conversation to a close. This is the last DentCast of the year 1404.
In this episode of Best in Fest, host Leslie LaPage sits down with Maegan La'Trese Fillmore — director, producer, activist, and founder of Hudson Fillmore — for a no-nonsense conversation about the real economics of independent filmmaking today. Maegan shares her journey from overseeing productions at Comedy Central, VH1, MTV, the NFL, YouTube Originals, and Paramount to directing award-winning indie projects like Soul Tie, and why she ultimately chose to build outside the studio system. In this episode, we break down:
Are you wondering if multifamily real estate is still a good investment in 2026? In this episode, Cameron Christiansen and Anthony Faso welcome Robert Pereira, founder and CEO of ARC Multifamily Group. Robert, who started his real estate journey during the 2008 downturn, has grown ARC into a successful multifamily operator with over 3,500 units. He explains why multifamily investments are still attractive despite challenges like inflation, increased construction costs, and rising insurance premiums. With over 20 years of experience, Robert discusses how the fundamentals of multifamily are back on track and why now is a great time for long-term investors. He shares his philosophy on ensuring investor protection, which includes clear business plans, strong communication, and a focus on returning capital. Robert also highlights how markets that were oversupplied a few years ago are now seeing positive leverage opportunities. Tune in for valuable insights on real estate, investment strategies, and what to look for when choosing a multifamily operator.

In This Episode:
- Why multifamily investors feel let down
- The impact of rent growth and inflation in real estate
- What to be aware of when investing in multifamily
- Why multifamily remains a strong investment in 2026
- The lessons learned from the 2017-2021 real estate boom
- Vertical integration: Why it's key for multifamily success
- Investor protection and growth during tough times
- How ARC evaluates deals in the current market
- Debt funds vs. equity deals: The right investment strategy
- How ARC protects investors
- Multifamily investment limitations

Resources:
Is your steak a byproduct of a corrupt financial ledger? Texas Slim (@modernTman) explains how food centralization serves as currency debasement. We discuss the 1971 "Big Fat Lie" and how ending the gold standard led to declining nutritional integrity via subsidized grains. Slim argues the health of our children is proof of work, noting the current legacy system is failing.

Modern cattle ranching is a struggle against corporate cartels. For years, the industry has prioritized inflationary weight gain over biological vitality. Slim describes the transition from forage-based systems to scientific manipulation. This centralization has hurt independent ranchers through regulatory capture and debt traps.

El Salvador is now a hub for regenerative agriculture and food security. Slim is moving away from Angus beef marketing myths to launch heritage breed programs designed for local microbiomes. Rather than a one-size-fits-all approach with Brahman cattle, he is building a sovereign food system. He believes fixing the money is the first step toward fixing the food.

Vertical integration allows producers to remove parasitic middlemen. The Beef Initiative develops decentralized micro-processing to return power to ranchers. By owning the value chain from the water table to the fork, producers can move away from the industrial machine.

The acquisition of beef.com represents a change. It acts as the digital backbone for a global movement connecting producers and consumers via a Bitcoin standard. This infrastructure ensures the narrative remains with land stewards. The goal is to build a future based on hard assets.

— Bitcoin Beach Team

Connect and learn more about Texas Slim:
X: Main: https://x.com/modernTman
X: Movement: https://x.com/@beefinitiative
X: Media: https://x.com/@TexasSlimsCuts
IG: https://www.instagram.com/iamtexasslim/
IG: https://www.instagram.com/texasslimscuts/
YT: https://www.youtube.com/@iamtexasslim
Web: https://harvestofdeception.substack.com/
Web: https://beef.com
Web: https://beefinitiative.com/
Web: https://beefnews.org/
Web: https://beefmaps.com/

Support and follow Bitcoin Beach:
X: https://www.twitter.com/BitcoinBeach
IG: https://www.instagram.com/bitcoinbeach_sv
TikTok: https://www.tiktok.com/@livefrombitcoinbeach
Web: https://www.bitcoinbeach.com

Browse through this quick guide to learn more about the episode:
00:00 Intro
05:42 Why the 1971 money shift ruined our food
11:08 How to exit the corporate meat monopoly
18:16 Why El Salvador is the hub for food security
22:49 How to build a sovereign cattle program
24:13 How decentralized processing kills the food cartel
31:59 Fixing food economics: Price per acre vs. pound
37:05 Mining volcanic soil for high-density protein
51:00 How Beef.com disrupts global middlemen
1:06:01 Protecting your wealth with hard assets

Live From Bitcoin Beach
In this episode of The Distribution, Brandon Sedloff sits down with Steven DeFrancis, Founder and CEO of Cortland, to unpack how multifamily evolved from a commodity product into a true consumer service business. Steven shares the story behind Cortland's transformation from a small merchant builder into a vertically integrated investment manager with more than 75,000 units and $20 billion in gross asset value. The conversation explores why operational depth, brand trust, and technology infrastructure now sit at the center of performance in living real estate. Steven walks through the post-GFC research that reshaped Cortland's strategy, the demographic shifts that extended renter lifecycles, and the deliberate decision to build operational infrastructure long before raising institutional LP capital. He also details how brand equity translates directly into pricing power, retention, and investor returns, and why scale is increasingly essential in a consolidating market.

They discuss:
- The pivot from merchant development to a vertically integrated operating platform
- Why multifamily shifted from a commodity to a consumer service business
- How brand trust creates measurable top-line rent premiums and longer resident tenure
- The role of data, AI, and centralized workflows in reducing fraud, speeding leasing, and improving performance
- Why 2026 and beyond may present compelling acquisition opportunities amid capital market stress and supply overhang

Links:
Cortland - https://cortland.com/
Steven on LinkedIn - https://www.linkedin.com/in/steven-defrancis-022a564/
Brandon on LinkedIn - https://www.linkedin.com/in/bsedloff/
Juniper Square - https://www.junipersquare.com/

Topics:
(00:00:00) - Intro
(00:03:21) - Steven's background and career
(00:13:48) - Building Cortland and lessons from the GFC
(00:20:06) - Building a vertically integrated operating platform
(00:24:13) - Raising institutional LP funds
(00:28:02) - Cortland's scale, markets, and fund vehicles
(00:34:22) - Operational alpha
(00:42:20) - 2026 market outlook
(00:50:40) - Tech and AI in multifamily
(00:55:28) - Advice for operators
(01:00:11) - Closing thoughts
Episode Synopsis: Today, we are talking about the new Ashley Valley Gorge Via Ferrata in Vernal, Utah. Via ferrata climbing is a type of mountain climbing that uses fixed cables, ladders, and metal rungs attached to the rock to help climbers safely traverse steep terrain. We get the perspective of Clint Cook, the CEO of Via Ferrata Solutions, and Mike Cook, the Uintah County Trails Manager, and hear what it took to build the nation's longest via ferrata. This new epic outdoor adventure was just built in Vernal, Utah, and it's already bringing visitors from around the world.

Watch the Documentary Series: Watch this episode of Small Town Comeback, an original documentary series, at www.smalltowncomeback.org

Show Notes: Visit the town of Vernal, Utah: dinoland.com

Sponsors: This episode is brought to you by: Uintah County Travel and Tourism, Uintah County Economic Development, Vernal City

Credits: This show is produced by Summer Creative Agency and V6 Media.
Host: Becca Summers
Audio Engineer: Coby Coonradt
Assistant Producer: Eden Bostrom
Chris Holman welcomes George Cook, VP of Sales and Marketing for TARUS, Sterling Heights, MI.

Welcome George, please tell us about TARUS?

As a finalist for the 2025 Manufacturing Innovation Excellence Award from the MMA, what does this recognition say about the future role of vertically integrated technology companies like TARUS in shaping the next generation of manufacturing operations?

From a business strategy standpoint, what drove the decision to apply an innovation mindset to developing an in-house ERP platform like VERAX?

VERAX is described as "created by manufacturers for manufacturers." How does that translate into measurable business outcomes—such as cost control, throughput, or decision-making—compared to traditional, off-the-shelf ERP systems?

Industry 4.0 capabilities like real-time machine monitoring, biometrics, and geolocation are built into VERAX. How are manufacturers using these tools today to improve productivity and competitiveness in an increasingly data-driven environment?

» Visit MBN website: www.michiganbusinessnetwork.com/
» Subscribe to MBN's YouTube: www.youtube.com/@MichiganbusinessnetworkMBN
» Like MBN: www.facebook.com/mibiznetwork
» Follow MBN: twitter.com/MIBizNetwork/
» MBN Instagram: www.instagram.com/mibiznetwork/

Sterling Heights-based VERAX ERP Selected as Finalist for Manufacturing Excellence Award

STERLING HEIGHTS — Sterling Heights' own VERAX ERP is receiving statewide recognition as a finalist for the 2025 Innovation Excellence Award. The honor is part of the Manufacturing Excellence Awards, presented annually by the Michigan Manufacturers Association (MMA). MMA will reveal and honor the winners of the 2025 Manufacturing Excellence Awards during a celebration on Thursday, Nov. 20, 2025, in Lansing. VERAX ERP was selected as a finalist for the 2025 Innovation Excellence Award due to its dedication and expertise in the industry. VERAX ERP is one of the very few pure-play software products wholly produced in the State of Michigan that serves the complex needs of manufacturing companies statewide.

The Manufacturing Excellence Awards are the annual statewide celebration of the exceptional contributions that Michigan manufacturers make to their workforce, their communities, the economy and the industry. The program promotes the inspiring stories of Michigan's manufacturing industry, the thousands of unique manufacturing companies across the state, the hundreds of thousands of Michiganders employed in the industry and the local communities that support it.

Starting out of a garage in Warren, Michigan in 1969, TARUS manufactures a variety of machine tools for heavy industry, including large-volume, high-precision 5-axis CNC machines, gundrill and deep-hole drilling machines for nuclear power, and coordinate measuring machines; TARUS also invented the Claymill. The Claymill revolutionized car and transportation design worldwide, and TARUS remains the preeminent global leader. Key to TARUS' success since its founding is its belief in total vertical integration. In the late 1970s, this philosophy meant TARUS created its own CNC control for the machines it built. It laid a foundation of software development dating back almost 50 years. For more than 120 years, MMA has served as a unifying champion of an industry that is in constant evolution and growth.
Michigan's manufacturers represent perhaps the most diverse manufacturing center in the entire world and, just as it has since the industrial revolution, Michigan will continue to be the cradle of innovation and invention for generations to come. MMA's sole purpose is to advocate for, support, train and grow the manufacturing industry in Michigan. Learn more about MMA and the 2025 Manufacturing Excellence Awards at mimfg.org/excellence.
Jordan Crawford explains the Permissionless Value Prop, a way of combining internal and external data to create outreach that earns attention.
- Why most AI SDR tools produce identical messages
- The limits of firmographic ICPs
- How to define a "paying qualified segment"
- Vertical vs. horizontal GTM trade-offs
- Where RevOps should start with AI
Hiii Mossies & Mosquitoes, welcome back to another episode of Moment of Silence. This week we took a moment to marvel at how lucky we are to be living in a place that is the very epitome of urban planning with smooth roads, bountiful parks, perfect air and the MOST comfortable public transport system. Is there even a place that could compete? We think not. That's right, this is an episode on all that Mumbai offers us. We talk about Indians and our civic sense, the ever-concerning AQI, construction noise, and Naina's determined quest to see a butterfly in the city. We also dive into Tu Ya Main's crocodile-infested scuba pool and discuss Shanaya Kapoor's flawed understanding of the influencer life. Sakshi takes us on a detour with a breakdown of her trip to Sri Lanka and then we make a small game of choosing between bad public facilities and particularly bad boyfriends. There's also a bit about plurals, or as Sakshi would like me to say plurii - in there, that you should not miss. Anyway, like, comment & don't stand under any bridges. Hit subscribe if you haven't already. Help us reach 100k before the Metro reaches completion, come on.

00:00 – Introduction
00:46 – You know the topic
01:43 – Signs you are breaking civic sense
04:12 – Ouch! The Wi-Fi cable gets choppy choppy
04:36 – Voting for MOS billboards on Bandra–Worli Sea Link
05:46 – Mumbai has a forest, discovered on Valentine's Day
11:07 – A metro mishap unfolds
12:29 – A quick review with spoiler alert: Tu Yaa Main
14:46 – A brand-new collaboration is unveiled
16:32 – The toughest part of being an influencer
17:31 – Shoutout to Collipop: sugar-free collagen lollipops
18:32 – Falling in love for two years now: Dheere hota hai pyaar
20:50 – Men and civic sense
21:45 – Funny ways to stop people from peeing in public places
22:53 – What happens on Valentine's Day
25:08 – What happens in Sri Lanka doesn't stay in Sri Lanka
27:00 – Would you rather? The answers get interesting
31:50 – The theory of not going to bed angry
41:17 – Men and commitment issues
43:25 – Kartik Aaryan's PR strategy (we are not part of it)
47:30 – Niche civic sense also exists
48:40 – Calling on WhatsApp when your number isn't saved
50:00 – Not sharing OTP: bad civic sense?
51:47 – Pool and pee
54:10 – Some statistics we just discover
56:42 – Like, subscribe, share, comment and help us reach 100K

Brutally Honest Creators - https://youtube.com/playlist?list=PLHkcqImp8gcbZHzn1secwSYYKG8dds437&si=wYCafRcBIKDy0BDC
Comedians Unfiltered - https://youtube.com/playlist?list=PLHkcqImp8gcabWOmtiYQUUXGU4ptrq9HB&si=sWm2ep8LZr8GU_7c
Follow MoS on Instagram: https://www.instagram.com/momentofsilencepod/reels/?hl=en

Credits:
Naina Bhan - Co-host and certified overthinker
https://www.instagram.com/nainabee?ig...
Sakshi Shivdasani - Co-host, balancing out Naina's overthinking with a healthy dose of not thinking
https://www.instagram.com/sakshishivdasani/?hl=en
Senior Producer - Amruta P.
https://www.linkedin.com/in/amruta-bandivdekar-01879925
Produced by "Vertical by Handmade" - Our personal cheering squad
https://www.instagram.com/thehandmadeproductions/
Creative direction by Tinkre, Keeper of MoS' signature "Pookie" energy - Natascha Mehra
https://www.instagram.com/tinkre.in/?hl=en
https://www.instagram.com/natascha.zip/?hl=en
Researched by our very own curiosity engineer - Aashna Sharma
https://www.linkedin.com/in/aashna-sharma-913146179
Reel Editor - Yug Verma
https://www.instagram.com/bass_abhiyug?igsh=MnlibHdsbG56MjNl&utm_source=qr

Disclaimer: The views and opinions expressed on this podcast are for entertainment purposes only and do not necessarily reflect those of the hosts, the production team, or affiliated brands. We don't claim to be experts - just two people with Wi-Fi and feelings. While we encourage open dialogue, we do not guarantee the accuracy, completeness, or reliability of any information shared. Listener discretion is advised — especially if you're allergic to strong opinions.
Disney+ Launches First Vertical Series “Locker Diaries” https://whatsondisneyplus.com/disney-launches-first-vertical-series-locker-diaries/ #DisneyPlus VISIT ONLINE - http://www.WhatsOnDisneyPlus.com If you enjoy our content, please consider supporting it via our Patreon or as a YouTube Channel Membership from as little as $2 a month and get access to exclusive content and much more.
WWJ auto analyst John McElroy reports that Lear Corporation found significant savings by moving to vertical integration and is now the largest American automotive supplier.
The Gold Coast is entering a new era of vertical living. Luxury apartment developments are rising across the coastline and the way buyers approach these projects needs to evolve.

In this episode of Buying Gold Coast, I am joined by Carly Cottam from Motiv to unpack the Gold Coast vertical living boom and what buyers need to understand in 2026. We break down how developers plan and stage new projects, how pricing and release strategies work, what drives demand in high-rise living, and the key risks and opportunities buyers should consider before committing to off-the-plan or apartment developments in today's market.

In this episode, we cover:
- The Gold Coast vertical living boom and why it is accelerating
- How developers structure, stage, and price new projects
- The psychology behind staged releases and off-the-plan marketing
- The risks buyers overlook when purchasing off the plan
- How to assess whether a development truly stacks up before committing

Connect with me!
Instagram → https://www.instagram.com/matt_srama/?hl=en
TikTok → https://www.tiktok.com/@matt_srama
LinkedIn → https://www.linkedin.com/in/matthew-srama-b8647b1ba/?originalSubdomain=au
BGC Podcast → https://open.spotify.com/show/6ZTdnNdku9iVdkBEFrND8V

Connect with Carly & Motiv
Instagram → https://www.instagram.com/carlycottam_motiv/?hl=en
LinkedIn → https://www.linkedin.com/in/carly-cottam-8466082a/?originalSubdomain=au
Motiv Website → https://motivgroup.com.au/
Motiv Instagram → https://www.instagram.com/motiv_projectmarketing/

Subscribe to Buying Gold Coast for weekly insights into Gold Coast property, buyer strategy, prestige real estate, and real-world guidance for navigating the Gold Coast high-end real estate market.
January 26, 2026: Air Taxis and Vertical Aerospace

Keep your eyes on the skies, because electric vertical takeoff and landing aircraft are about to take off.

Not quite an airplane and not quite a helicopter, an eVTOL is perhaps best described as a piloted drone that carries passengers. They ascend straight up during takeoff, are quieter than a refrigerator, give off zero emissions, and can reach top speeds of 200 miles per hour.

eVTOLs are a unique new form of transportation that could reduce traffic congestion in densely populated areas, and they are gaining regulatory clearances both in the US and abroad. Several companies are already conducting pilot programs that have been partially funded by airlines and automakers.

On Monday's show, MyWallSt's founder Emmet Savage and I discuss how this new industry is reaching a higher altitude.

Our stock of focus was Vertical Aerospace (NYSE: EVTL), a fascinating innovator that's also much less expensive than its eVTOL peers.

⚠️ Not financial advice. Do your own research before investing.

#evtol #watchlist #stockpicks #dividends #chipstocks #marketing #7investing #investing2026 #techinvesting
From rewriting Google's search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions, from CPUs and sharded indices to multimodal models that reason across text, video, and code.

Jeff joins us to unpack what it really means to "own the Pareto frontier," why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules), not FLOPs, is becoming the true bottleneck, what it was like leading the charge to unify all of Google's AI teams, and why the next leap won't come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.

We discuss:
* Jeff's early neural net thesis in 1990: parallel training before it was cool, why he believed scaling would win decades early, and the "bigger model, more data, better results" mantra that held for 15 years
* The evolution of Google Search: sharding, moving the entire index into memory in 2001, softening query semantics pre-LLMs, and why retrieval pipelines already resemble modern LLM systems
* Pareto frontier strategy: why you need both frontier "Pro" models and low-latency "Flash" models, and how distillation lets smaller models surpass prior generations
* Distillation deep dive: ensembles → compression → logits as soft supervision, and why you need the biggest model to make the smallest one good
* Latency as a first-class objective: why 10–50x lower latency changes UX entirely, and how future reasoning workloads will demand 10,000 tokens/sec
* Energy-based thinking: picojoules per bit, why moving data costs 1000x more than a multiply, batching through the lens of energy, and speculative decoding as amortization
* TPU co-design: predicting ML workloads 2–6 years out, speculative hardware features, precision reduction, sparsity, and the constant feedback loop between model architecture and silicon
* Sparse models and "outrageously large" networks: trillions of parameters with 1–5% activation, and why sparsity was always the right abstraction
* Unified vs. specialized models: abandoning symbolic systems, why general multimodal models tend to dominate vertical silos, and when vertical fine-tuning still makes sense
* Long context and the illusion of scale: beyond needle-in-a-haystack benchmarks toward systems that narrow trillions of tokens to 117 relevant documents
* Personalized AI: attending to your emails, photos, and documents (with permission), and why retrieval + reasoning will unlock deeply personal assistants
* Coding agents: 50 AI interns, crisp specifications as a new core skill, and how ultra-low latency will reshape human–agent collaboration
* Why ideas still matter: transformers, sparsity, RL, hardware, systems — scaling wasn't blind; the pieces had to multiply together

Show Notes:
* Gemma 3 Paper
* Gemma 3
* Gemini 2.5 Report
* Jeff Dean's "Software Engineering Advice from Building Large-Scale Distributed Systems" Presentation (with Back of the Envelope Calculations)
* Latency Numbers Every Programmer Should Know by Jeff Dean
* The Jeff Dean Facts
* Jeff Dean Google Bio
* Jeff Dean on "Important AI Trends" @Stanford AI Club
* Jeff Dean & Noam Shazeer — 25 years at Google (Dwarkesh)

Jeff Dean
* LinkedIn: https://www.linkedin.com/in/jeff-dean-8b212555
* X: https://x.com/jeffdean

Google
* https://google.com
* https://deepmind.google

Full Video Episode

Timestamps
00:00:04 — Introduction: Alessio & Swyx welcome Jeff Dean, chief AI scientist at Google, to the Latent Space podcast
00:00:30 — Owning the Pareto Frontier & balancing frontier vs low-latency models
00:01:31 — Frontier models vs Flash models + role of distillation
00:03:52 — History of distillation and its original motivation
00:05:09 — Distillation's role in modern model scaling
00:07:02 — Model hierarchy (Flash, Pro, Ultra) and distillation sources
00:07:46 — Flash model economics & wide deployment
00:08:10 — Latency importance for complex tasks
00:09:19 — Saturation of some tasks and future frontier tasks
00:11:26 — On benchmarks, public vs internal
00:12:53 — Example long-context benchmarks & limitations
00:15:01 — Long-context goals: attending to trillions of tokens
00:16:26 — Realistic use cases beyond pure language
00:18:04 — Multimodal reasoning and non-text modalities
00:19:05 — Importance of vision & motion modalities
00:20:11 — Video understanding example (extracting structured info)
00:20:47 — Search ranking analogy for LLM retrieval
00:23:08 — LLM representations vs keyword search
00:24:06 — Early Google search evolution & in-memory index
00:26:47 — Design principles for scalable systems
00:28:55 — Real-time index updates & recrawl strategies
00:30:06 — Classic "Latency numbers every programmer should know"
00:32:09 — Cost of memory vs compute and energy emphasis
00:34:33 — TPUs & hardware trade-offs for serving models
00:35:57 — TPU design decisions & co-design with ML
00:38:06 — Adapting model architecture to hardware
00:39:50 — Alternatives: energy-based models, speculative decoding
00:42:21 — Open research directions: complex workflows, RL
00:44:56 — Non-verifiable RL domains & model evaluation
00:46:13 — Transition away from symbolic systems toward unified LLMs
00:47:59 — Unified models vs specialized ones
00:50:38 — Knowledge vs reasoning & retrieval + reasoning
00:52:24 — Vertical model specialization & modules
00:55:21 — Token count considerations for vertical domains
00:56:09 — Low resource languages & contextual learning
00:59:22 — Origins: Dean's early neural network work
01:10:07 — AI for coding & human–model interaction styles
01:15:52 — Importance of crisp specification for coding agents
01:19:23 — Prediction: personalized models & state retrieval
01:22:36 — Token-per-second targets (10k+) and reasoning throughput
01:23:20 — Episode conclusion and thanks

Transcript

Alessio Fanelli [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by Swyx, editor of Latent Space.

Shawn Wang [00:00:11]: Hello, hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome. Thanks for having me. It's a bit surreal to have you in the studio. I've watched so many of your talks, and obviously your career has been super legendary. So, I mean, congrats. I think the first thing must be said, congrats on owning the Pareto Frontier.

Jeff Dean [00:00:30]: Thank you, thank you. Pareto Frontiers are good. It's good to be out there.

Shawn Wang [00:00:34]: Yeah, I mean, I think it's a combination of both. You have to own the Pareto Frontier. You have to have like frontier capability, but also efficiency, and then offer that range of models that people like to use. And, you know, some part of this was started because of your hardware work. Some part of that is your model work, and I'm sure there's lots of secret sauce that you guys have worked on cumulatively. But, like, it's really impressive to see it all come together.

Jeff Dean [00:01:04]: Yeah, yeah. I mean, I think, as you say, it's not just one thing. It's like a whole bunch of things up and down the stack. And, you know, all of those really combine to help make us able to make highly capable large models, as well as, you know, software techniques to get those large model capabilities into much smaller, lighter weight models that are, you know, much more cost effective and lower latency, but still, you know, quite capable for their size. Yeah.

Alessio Fanelli [00:01:31]: How much pressure do you have on, like, having the lower bound of the Pareto Frontier, too? I think, like, the new labs are always trying to push the top performance frontier because they need to raise more money and all of that. And you guys have billions of users. And I think initially when you worked on the TPU, you were thinking about, you know, if everybody that used Google used the voice model for, like, three minutes a day, you would need to double your CPU number. Like, what's that discussion today at Google? Like, how do you prioritize frontier versus, like, we have to do this? How do we actually need to deploy it if we build it?

Jeff Dean [00:02:03]: Yeah, I mean, I think we always want to have models that are at the frontier or pushing the frontier because I think that's where you see what capabilities now exist that didn't exist at the sort of slightly less capable last year's version or last six months ago version. At the same time, you know, we know those are going to be really useful for a bunch of use cases, but they're going to be a bit slower and a bit more expensive than people might like for a bunch of broader uses. So I think what we want to do is always have kind of a highly capable sort of affordable model that enables a whole bunch of, you know, lower latency use cases. People can use them for agentic coding much more readily and then have the high-end, you know, frontier model that is really useful for, you know, deep reasoning, you know, solving really complicated math problems, those kinds of things. And it's not that one or the other is useful. They're both useful. So I think we'd like to do both.
And also, you know, through distillation, which is a key technique for making the smaller models more capable, you know, you have to have the frontier model in order to then distill it into your smaller model. So it's not like an either-or choice. You sort of need that in order to actually get a highly capable, more modest size model. Yeah.

Alessio Fanelli [00:03:24]: I mean, you and Geoffrey came up with distillation in 2014.

Jeff Dean [00:03:28]: Don't forget Oriol Vinyals as well. Yeah, yeah.

Alessio Fanelli [00:03:30]: A long time ago. But like, I'm curious how you think about the cycle of these ideas, even like, you know, sparse models and, you know, how do you reevaluate them? How do you think about in the next generation of model, what is worth revisiting? Like, yeah, they're just kind of like, you know, you worked on so many ideas that end up being influential, but like in the moment, they might not feel that way necessarily. Yeah.

Jeff Dean [00:03:52]: I mean, I think distillation was originally motivated because we were seeing that we had a very large image data set at the time, you know, 300 million images that we could train on. And we were seeing that if you create specialists for different subsets of those image categories, you know, this one's going to be really good at sort of mammals, and this one's going to be really good at sort of indoor room scenes or whatever, and you can cluster those categories and train on an enriched stream of data after you do pre-training on a much broader set of images, you get much better performance. You can then treat that whole set of maybe 50 models you've trained as a large ensemble, but that's not a very practical thing to serve, right? So distillation really came about from the idea of, okay, what if we want to actually serve that and train all these independent sort of expert models and then squish it into something that actually fits in a form factor that you can actually serve? And that's, you know, not that different from what we're doing today. You know, often today, instead of having an ensemble of 50 models, we're having a much larger scale model that we then distill into a much smaller scale model.

Shawn Wang [00:05:09]: Yeah. A part of me also wonders if distillation also has a story with the RL revolution. So let me maybe try to articulate what I mean by that, which is you can, RL basically spikes models in a certain part of the distribution. And then you have to sort of, well, you can spike models, but usually sometimes... It might be lossy in other areas and it's kind of like an uneven technique, but you can probably distill it back and you can, I think that the sort of general dream is to be able to advance capabilities without regressing on anything else. And I think like that, that whole capability merging without loss, I feel like it's like, you know, some part of that should be a distillation process, but I can't quite articulate it. I haven't seen many papers about it.

Jeff Dean [00:06:01]: Yeah, I mean, I tend to think of one of the key advantages of distillation is that you can have a much smaller model and you can have a very large, you know, training data set and you can get utility out of making many passes over that data set because you're now getting the logits from the much larger model in order to sort of coax the right behavior out of the smaller model that you wouldn't otherwise get with just the hard labels. And so, you know, I think that's what we've observed: you can get, you know, very close to your largest model performance with distillation approaches. And that seems to be, you know, a nice sweet spot for a lot of people because it enables us to kind of, for multiple Gemini generations now, we've been able to make the sort of Flash version of the next generation as good or even substantially better than the previous generation's Pro. And I think we're going to keep trying to do that because that seems like a good trend to follow.
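The soft-label idea Jeff describes above can be sketched in a few lines of NumPy: the student is trained against the teacher's temperature-softened logits, blended with the ordinary hard-label loss. This is an illustrative sketch, assuming generic temperature and mixing-weight hyperparameters; it is not anything Gemini-specific.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature > 1 softens the distribution, exposing the teacher's
    # "dark knowledge" about relative similarity between classes.
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, hard_labels,
                      temperature=4.0, alpha=0.5):
    """Blend of soft-label loss (match the teacher) and hard-label loss."""
    # Cross-entropy against the teacher's softened distribution.
    p_teacher = softmax(teacher_logits, temperature)
    log_p_student_soft = np.log(softmax(student_logits, temperature))
    soft_loss = -(p_teacher * log_p_student_soft).sum(axis=-1).mean()
    # Standard cross-entropy against the one-hot labels.
    log_p_student = np.log(softmax(student_logits))
    hard_loss = -log_p_student[np.arange(len(hard_labels)), hard_labels].mean()
    # The T^2 factor keeps soft-loss gradients comparable across temperatures.
    return alpha * (temperature ** 2) * soft_loss + (1 - alpha) * hard_loss
```

The key detail is that the teacher's full logit distribution carries far more signal per example than a one-hot label, which is why many passes over the same data keep paying off for the smaller model.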
Shawn Wang [00:07:02]: So, Dara asked: the original map was Flash, Pro, and Ultra. Are you just sitting on Ultra and distilling from that? Is that like the mother lode?

Jeff Dean [00:07:12]: I mean, we have a lot of different kinds of models. Some are internal ones that are not necessarily meant to be released or served. Some are, you know, our Pro scale model, and we can distill from that as well into our Flash scale model. So I think, you know, it's an important set of capabilities to have. And also inference time scaling can be a useful thing to improve the capabilities of the model.

Shawn Wang [00:07:35]: And yeah, yeah, cool. Yeah. And obviously, I think the economy of Flash is what led to the total dominance. I think the latest number is like 50 trillion tokens. I don't know. I mean, obviously, it's changing every day.

Jeff Dean [00:07:46]: Yeah, yeah. But, you know, by market share, hopefully up.

Shawn Wang [00:07:50]: No, I mean, there's no I mean, there's just the economics wise, like because Flash is so economical, like you can use it for everything. Like it's in Gmail now. It's in YouTube. Like it's yeah. It's in everything.

Jeff Dean [00:08:02]: We're using it more in our search products of various AI mode reviews.

Shawn Wang [00:08:05]: Oh, my God. Flash past the AI mode. Oh, my God. Yeah, that's yeah, I didn't even think about that.

Jeff Dean [00:08:10]: I mean, I think one of the things that is quite nice about the Flash model is not only is it more affordable, it's also lower latency. And I think latency is actually a pretty important characteristic for these models because we're going to want models to do much more complicated things that are going to involve, you know, generating many more tokens from when you ask the model to do so. So, you know, if you're going to ask the model to do something until it actually finishes what you ask it to do, because you're going to ask now, not just write me a for loop, but like write me a whole software package to do X or Y or Z. And so having low latency systems that can do that seems really important. And Flash is one direction, one way of doing that. You know, obviously our hardware platforms enable a bunch of interesting aspects of our, you know, serving stack as well, like TPUs; the interconnect between chips on the TPUs is actually quite, quite high performance and quite amenable to, for example, long context kind of attention operations, you know, having sparse models with lots of experts. These kinds of things really, really matter a lot in terms of how do you make them servable at scale.

Alessio Fanelli [00:09:19]: Yeah. Does it feel like there's some breaking point for like the Pro-to-Flash distillation, kind of like one generation delayed? I almost think of the capability as, in certain tasks, like the Pro model today saturates some sort of task. So next generation, that same task will be saturated at the Flash price point.
And I think for most of the things that people use models for, at some point the Flash model in two generations will be able to do basically everything. And how do you make it economical to like keep pushing the Pro frontier when a lot of the population will be okay with the Flash model? I'm curious how you think about that.

Jeff Dean [00:09:59]: I mean, I think that's true if your distribution of what people are asking the models to do is stationary, right? But I think what often happens is as the models become more capable, people ask them to do more, right? So, I mean, I think this happens in my own usage. Like I used to try our models a year ago for some sort of coding task, and it was okay at some simpler things, but wouldn't work very well for more complicated things. And since then, we've improved dramatically on the more complicated coding tasks. And now I'll ask it to do much more complicated things. And I think that's true, not just of coding, but of, you know, now, you know, can you analyze all the, you know, renewable energy deployments in the world and give me a report on solar panel deployment or whatever. That's a very complicated, you know, more complicated task than people would have asked a year ago. And so you are going to want more capable models to push the frontier in advance of what people ask the models to do. And that also then gives us insight into, okay, where does the, where do things break down? How can we improve the model in these, these particular areas, uh, in order to sort of, um, make the next generation even better.

Alessio Fanelli [00:11:11]: Yeah. Are there any benchmarks or like test sets you use internally? Because it's almost like the same benchmarks get reported every time. And it's like, all right, it's like 99 instead of 97. Like, how do you keep pushing the team internally? Or like, this is what we're building towards. Yeah.

Jeff Dean [00:11:26]: I mean, I think benchmarks, particularly external ones that are publicly available, have their utility, but they often kind of have a lifespan of utility where they're introduced and maybe they're quite hard for current models. You know, I, I like to think of the best kinds of benchmarks as ones where the initial scores are like 10 to 20 or 30%, maybe, but not higher. And then you can sort of work on improving that capability for, uh, whatever it is the benchmark is trying to assess and get it up to like 80, 90%, whatever. I, I think once it hits kind of 95% or something, you get very diminishing returns from really focusing on that benchmark, cuz it's sort of, it's either the case that you've now achieved that capability, or there's also the issue of leakage in public data or very related kind of data being, being in your training data. Um, so we have a bunch of held out internal benchmarks that we really look at where we know that wasn't represented in the training data at all. There are capabilities that we want the model to have, um, yeah, um, that it doesn't have now, and then we can work on, you know, assessing, you know, how do we make the model better at these kinds of things? Is it, we need different kind of data to train on that's more specialized for this particular kind of task?
Do we need, um, you know, a bunch of, uh, you know, architectural improvements or some sort of, uh, model capability improvements, you know, what would help make that better?

Shawn Wang [00:12:53]: Is there, is there such an example that you, uh, a benchmark inspired an architectural improvement? Like, uh, I'm just kind of jumping on that because you just...

Jeff Dean [00:13:02]: Uh, I mean, I think some of the long context capability of the, of the Gemini models that came, I guess, first in 1.5 really were about looking at, okay, we want to have, um, you know,

Shawn Wang [00:13:15]: immediately everyone jumped to like completely green charts of like, everyone had, I was like, how did everyone crack this at the same time? Right. Yeah. Yeah.

Jeff Dean [00:13:23]: I mean, I think, um, and once you're set, I mean, as you say, that needle-in-a-haystack benchmark is really saturated for at least context lengths up to 128K or something. We don't actually have, you know, much beyond 128K these days. We're trying to push the frontier of 1 million or 2 million context, which is good because I think there are a lot of use cases where, yeah, you know, putting a thousand pages of text or putting, you know, multiple hour-long videos in the context and then actually being able to make use of that is useful. The areas to explore there are fairly large. But the single-needle-in-a-haystack benchmark is sort of saturated. So you really want more complicated, sort of multi-needle or more realistic "take all this content and produce this kind of answer" benchmarks from a long context that sort of better assess what it is people really want to do with long context. Which is not just, you know, can you tell me the product number for this particular thing?

Shawn Wang [00:14:31]: Yeah, it's retrieval. It's retrieval within machine learning. It's interesting because I think the more meta level I'm trying to operate at here is you have a benchmark. You're like, okay, I see the architectural thing I need to do in order to go fix that. But should you do it? Because sometimes that's an inductive bias, basically. It's what Jason Wei, who used to work at Google, would say. Exactly the kind of thing. Yeah, you're going to win. Short term. Longer term, I don't know if that's going to scale. You might have to undo that.

Jeff Dean [00:15:01]: I mean, I like to sort of not focus on exactly what solution we're going to derive, but what capability would you want? And I think we're very convinced that, you know, long context is useful, but it's way too short today. Right? Like, I think what you would really want is, can I attend to the internet while I answer my question? Right? But that's not going to happen. I don't think that's going to be solved by purely scaling the existing solutions, which are quadratic. So a million tokens kind of pushes what you can do. You're not going to do that for a billion tokens, let alone a trillion. But I think if you could give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses for that. You could attend to the internet. You could attend to the pixels of YouTube and the sort of deeper representations that we can find, not just for a single video, but across many videos. You know, on a personal Gemini level, you could attend to all of your personal state with your permission.
So like your emails, your photos, your docs, your plane tickets you have. I think that would be really, really useful. And the question is, how do you get algorithmic improvements and system level improvements that get you to something where you actually can attend to trillions of tokens, right? In a meaningful way. Yeah.

Shawn Wang [00:16:26]: But by the way, I think I did some math and it's like, if you spoke all day, every day for eight hours a day, you only generate a maximum of like a hundred K tokens, which like very comfortably fits.

Jeff Dean [00:16:38]: Right. But if you then say, okay, I want to be able to understand everything people are putting on videos.

Shawn Wang [00:16:46]: Well, also, I think that the classic example is you start going beyond language into like proteins and whatever else is extremely information dense. Yeah. Yeah.

Jeff Dean [00:16:55]: I mean, I think one of the things about Gemini's multimodal aspects is we've always wanted it to be multimodal from the start. And so, you know, that sometimes to people means text and images and video and audio, sort of human-like modalities. But I think it's also really useful to have Gemini know about non-human modalities. Yeah. Like LIDAR sensor data from, say, Waymo vehicles, or like robots or, you know, various kinds of health modalities, x-rays and MRIs and imaging and genomics information. And I think there's probably hundreds of modalities of data where you'd like the model to be able to at least be exposed to the fact that this is an interesting modality and has certain meaning in the world. Where even if you haven't trained on all the LIDAR data or MRI data you could have, because maybe that's not, you know, it doesn't make sense in terms of trade-offs of, you know, what you include in your main pre-training data mix, at least including a little bit of it is actually quite useful. Yeah. Because it sort of teaches the model that this is a thing.

Shawn Wang [00:18:04]: Yeah. Do you believe, I mean, since we're on this topic and something I just get to ask you all the questions I always wanted to ask, which is fantastic. Like, are there some king modalities, like modalities that supersede all the other modalities? So a simple example was Vision can, on a pixel level, encode text. And DeepSeek had this DeepSeek OCR paper that did that. Vision. And Vision has also been shown to maybe incorporate audio because you can do audio spectrograms and that's, that's also like a Vision capable thing. Like, so, so maybe Vision is just the king modality and like. Yeah.

Jeff Dean [00:18:36]: I mean, Vision and Motion are quite important things, right? Motion. Well, like video as opposed to static images, because I mean, there's a reason evolution has evolved eyes like 23 independent ways, because it's such a useful capability for sensing the world around you, which is really what we want these models to be. So I think what we want these models to be able to do is interpret the things we're seeing or the things we're paying attention to and then help us in using that information to do things. Yeah.

Shawn Wang [00:19:05]: I think motion, you know, I still want to shout out, I think Gemini, still the only native video understanding model that's out there. So I use it for YouTube all the time. Nice.

Jeff Dean [00:19:15]: Yeah. Yeah. I mean, it's actually, I think people kind of are not necessarily aware of what the Gemini models can actually do. Yeah. Like I have an example I've used in one of my talks.
It had like, it was like a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has like Michael Jordan hitting some jump shot at the end of the finals and, you know, some soccer goals and things like that. And you can literally just give it the video and say, can you please make me a table of what all these different events are, what the date is when they happened, and a short description. And so you get like now an 18-row table of that information extracted from the video, which is, you know, not something most people think of as like a "turn video into a SQL-like table" capability.

Alessio Fanelli [00:20:11]: Has there been any discussion inside of Google of like, you mentioned attending to the whole internet, right? Google, it's almost built because a human cannot attend to the whole internet and you need some sort of ranking to find what you need. Yep. That ranking is like much different for an LLM because you can expect a person to look at maybe the first five, six links in a Google search versus for an LLM, should you expect to have 20 links that are highly relevant? Like how do you internally figure out, you know, how do we build the AI mode that is like maybe like much broader search and span versus like the more human one? Yeah.

Jeff Dean [00:20:47]: I mean, I think even pre-language-model-based work, you know, our ranking systems would be built to start with a giant number of web pages in our index; many of them are not relevant. So you identify a subset of them that are relevant with very lightweight kinds of methods. You know, you're down to like 30,000 documents or something. And then you gradually refine that to apply more and more sophisticated algorithms and more and more sophisticated sort of signals of various kinds in order to get down to ultimately what you show, which is, you know, the final 10 results or, you know, 10 results plus other kinds of information. And I think an LLM-based system is not going to be that dissimilar, right? You're going to attend to trillions of tokens, but you're going to want to identify, you know, what are the 30,000-ish documents that are with the, you know, maybe 30 million interesting tokens. And then how do you go from that into what are the 117 documents I really should be paying attention to in order to carry out the tasks that the user has asked? And I think, you know, you can imagine systems where you have, you know, a lot of highly parallel processing to identify those initial 30,000 candidates, maybe with very lightweight kinds of models. Then you have some system that sort of helps you narrow down from 30,000 to the 117 with maybe a little bit more sophisticated model or set of models. And then maybe the final model, the thing that looks at the 117 things, might be your most capable model. So I think it's going to be some system like that that really enables you to give the illusion of attending to trillions of tokens. Sort of the way Google search gives you, you know, not the illusion, but you are searching the internet, but you're finding, you know, a very small subset of things that are, that are relevant.
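To make the staged narrowing concrete, here is a hypothetical sketch of the funnel Jeff describes: each stage runs a progressively more expensive model over progressively fewer candidates. The stage sizes (30,000 and 117) are the figures from the conversation; the function names are placeholders, not Google's actual pipeline.

```python
# Hypothetical sketch of the staged retrieval funnel described above.
# The stage sizes (30,000 and 117) come from the conversation; the scoring
# functions are placeholders, not Google's actual ranking stack.
def retrieval_funnel(query, corpus, cheap_score, reranker, frontier_model):
    # Stage 1: very lightweight, highly parallel scoring (keyword or
    # embedding match, sharded across machines) -> ~30,000 candidates.
    candidates = sorted(corpus, key=lambda doc: cheap_score(query, doc),
                        reverse=True)[:30_000]
    # Stage 2: a small, cheaper model reranks the survivors -> ~117 docs.
    shortlist = sorted(candidates, key=lambda doc: reranker(query, doc),
                       reverse=True)[:117]
    # Stage 3: only the shortlist enters the most capable model's context,
    # giving the illusion of having attended to the whole corpus.
    return frontier_model(query, shortlist)
```

In a real system the first stage would of course be a sharded index scan rather than an in-memory sort, but the shape is the same: cheap filters do the volume, expensive models do the judgment.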
Shawn Wang [00:22:47]: Yeah. I often tell a lot of people that are not steeped in like Google search history that, well, you know, like BERT was basically immediately inside of Google search and that improved results a lot, right? Like I don't, I don't have any numbers off the top of my head, but like, I'm sure you guys, that's obviously the most important numbers to Google. Yeah.

Jeff Dean [00:23:08]: I mean, I think going to an LLM-based representation of text and words and so on enables you to get out of the explicit hard notion of particular words having to be on the page, but really getting at the notion of this topic of this page or this paragraph is highly relevant to this query. Yeah.

Shawn Wang [00:23:28]: I don't think people understand how much LLMs have taken over all these very high traffic systems, very high traffic. Yeah. Like it's Google, it's YouTube. YouTube has this like semantic ID thing where it's just like every token or every item in the vocab is a YouTube video or something that predicts the video using a code book, which is absurd to me for YouTube size.

Jeff Dean [00:23:50]: And then most recently Grok also, for, for xAI, which is like, yeah. I mean, I'll call out even before LLMs were used extensively in search, we put a lot of emphasis on softening the notion of what the user actually entered into the query.

Shawn Wang [00:24:06]: So do you have like a history of like, what's the progression? Oh yeah.

Jeff Dean [00:24:09]: I mean, I actually gave a talk at, uh, I guess, uh, the Web Search and Data Mining conference in 2009, uh, where, we'd never actually published any papers about the origins of Google search, uh, sort of, but we went through four or five or six generations of, uh, redesigning of the search and retrieval system, uh, from about 1999 through 2004 or five. And that talk is really about that evolution. And one of the things that really happened in 2001 was we were sort of working to scale the system in multiple dimensions. So one is we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality in general, uh, because if you don't have the page in your index, you're going to not do well. Um, and then we also needed to scale our capacity because our traffic was growing quite extensively. Um, and so we had, you know, a sharded system where you have more and more shards as the index grows: you have like 30 shards, and then if you want to double the index size, you make 60 shards so that you can bound the latency by which you respond for any particular user query. Um, and then as traffic grows, you add more and more replicas of each of those. And so we eventually did the math and realized that in a data center where we had say 60 shards and, um, you know, 20 copies of each shard, we now had 1200 machines, uh, with disks. And we did the math and we're like, hey, one copy of that index would actually fit in memory across 1200 machines. So in 2001, we put our entire index in memory, and what that enabled from a quality perspective was amazing. Before, you had to be really careful about, you know, how many different terms you looked at for a query, because every one of them would involve a disk seek on every one of the 60 shards. And so, as you make your index bigger, that becomes even more inefficient. But once you have the whole index in memory, it's totally fine to have 50 terms you throw into the query from the user's original three or four word query, because now you can add synonyms like restaurant and restaurants and cafe and, uh, you know, things like that. Uh, bistro and all these things. And you can suddenly start, uh, sort of really, uh, getting at the meaning of the word as opposed to the exact semantic form the user typed in. And that was, you know, 2001, very much pre-LLM, but really it was about softening the strict definition of what the user typed in order to get at the meaning.
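The in-memory index decision is a classic back-of-the-envelope calculation. Here is a rough version of that arithmetic in Python: the shard and replica counts are from the anecdote, while the RAM, index-size, and latency figures are assumed era-plausible values, not reported numbers.

```python
# Back-of-the-envelope version of the 2001 in-memory index math.
# Shard and replica counts are from the anecdote above; RAM, index-size,
# and latency figures are assumed era-plausible values, not reported ones.
shards = 60        # index partitions, sized to bound per-query latency
replicas = 20      # copies of each shard to absorb growing traffic
machines = shards * replicas                  # -> 1,200 machines with disks

ram_per_machine_gb = 2                        # assumption for ~2001 hardware
total_ram_gb = machines * ram_per_machine_gb  # -> 2,400 GB across the fleet
index_size_gb = 1500                          # assumed size of one index copy
print(index_size_gb <= total_ram_gb)          # True: one copy fits in aggregate RAM

# Why it mattered: on disk, every extra query term costs a ~10 ms seek on
# each shard; in memory it is ~100 ns per reference, so expanding a 3-4
# word query into ~50 synonym terms becomes affordable.
disk_seek_ms = 10
terms = 50
print(terms * disk_seek_ms)  # 500 ms of seek time per shard, per query, avoided
```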
Alessio Fanelli [00:26:47]: What are like principles that you use to like design the systems, especially when you have, I mean, in 2001, the internet is like doubling, tripling every year in size, and I think today you kind of see that with LLMs too, where like every year the jumps in size and like capabilities are just so big. Are there just any, you know, principles that you use to like, think about this? Yeah.

Jeff Dean [00:27:08]: I mean, I think, uh, you know, first, whenever you're designing a system, you want to understand what are the sort of design parameters that are going to be most important in designing that, you know? So, you know, how many queries per second do you need to handle? How big is the internet? How big is the index you need to handle? How much data do you need to keep for every document in the index? How are you going to look at it when you retrieve things? Um, what happens if traffic were to double or triple, you know, will that system work well? And I think a good design principle is you're going to want to design a system so that the most important characteristics could scale by like factors of five or 10, but probably not beyond that, because often what happens is if you design a system for X and something suddenly becomes a hundred X, that would enable a very different point in the design space that would not make sense at X but all of a sudden at a hundred X makes total sense. So like going from a disk-based index to an in-memory index makes a lot of sense once you have enough traffic, because now you have enough replicas of the sort of state on disk that those machines now actually can hold, uh, you know, a full copy of the, uh, index in memory. Yeah. And that all of a sudden enabled a completely different design that wouldn't have been practical before. Yeah. Um, so I'm, I'm a big fan of thinking through designs in your head, just kind of playing with the design space a little before you actually do a lot of writing of code. But, you know, as you said, in the early days of Google, we were growing the index, uh, quite extensively. We were growing the update rate of the index. So the update rate actually is the parameter that changed the most. Surprising. So it used to be once a month.

Shawn Wang [00:28:55]: Yeah.

Jeff Dean [00:28:56]: And then we went to a system that could update any particular page in like sub one minute. Okay.

Shawn Wang [00:29:02]: Yeah. Because this is a competitive advantage, right?

Jeff Dean [00:29:04]: Because all of a sudden news related queries, you know, if you're, if you've got last month's news index, it's not actually that useful for.

Shawn Wang [00:29:11]: News is a special beast. Was there any, like you could have split it onto a separate system.

Jeff Dean [00:29:15]: Well, we did. We launched a Google News product, but you also want news related queries that people type into the main index to also be sort of updated.

Shawn Wang [00:29:23]: So, yeah, it's interesting. And then you have to like classify whether the page is, you have to decide which pages should be updated and what frequency.
Oh yeah.

Jeff Dean [00:29:30]: There's a whole system behind the scenes that's trying to decide update rates and the importance of pages. You might still want to recrawl important pages quite often, because even if the likelihood they change is low, the value of having them up to date is high.

Shawn Wang [00:29:50]: Yeah. This mention of latency, and of saving things to disk, reminds me of one of your classics, which I have to bring up: "Latency Numbers Every Programmer Should Know." What's the story behind that? Did you just write it down?

Jeff Dean [00:30:06]: I mean, it has eight or ten different kinds of metrics: how long does a cache miss take? How long does a branch mispredict take? How long does a reference to main memory take? How long does it take to send a packet from the US to the Netherlands?

Shawn Wang [00:30:21]: Why the Netherlands, by the way? Is that because of Chrome?

Jeff Dean [00:30:25]: We had a data center in the Netherlands. I think this gets to the point of being able to do back-of-the-envelope calculations. These are the raw ingredients of those, and you can use them to say: okay, if I need to design a system to do image search and thumbnailing of the result page, how would I do that? I could pre-compute the image thumbnails, or I could try to thumbnail them on the fly from the larger images. What would that do? How much disk bandwidth would I need? How many disk seeks would I do? You can run those thought experiments in thirty seconds or a minute with the basic numbers at your fingertips. And then, as you build software using higher-level libraries, you want to develop the same intuitions for how long it takes to look up something in this particular kind of...

Shawn Wang [00:31:51]: Which is a simple byte conversion. That's nothing interesting. I wonder, if you were to update your...

Jeff Dean [00:31:58]: I mean, I think it's really good to think about the calculations you're doing in a model, either for training or inference. Often a good way to view that is: how much state will you need to bring in from memory, whether on-chip SRAM, HBM (the accelerator-attached memory), DRAM, or over the network? And then, how expensive is that data motion relative to the cost of an actual multiply in the matrix multiply unit? That cost is actually really, really low: depending on your precision, I think it's sub one picojoule.

Shawn Wang [00:32:50]: Oh, okay. You measure it by energy. Yeah.

Jeff Dean [00:32:52]: Yeah. It's all going to be about energy and how you make the most energy-efficient system. Moving data from SRAM on the other side of the chip, not even off-chip but on the other side of the same chip, can be a thousand picojoules. And all of a sudden, this is why your accelerators require batching: if you move, say, a parameter of a model from SRAM on the chip into the multiplier unit, that's going to cost you a thousand picojoules, so you'd better make use of the thing you moved many, many times. That's where the batch dimension comes in. Because all of a sudden, if you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's really not good.

Shawn Wang [00:33:40]: Yeah. Right.

Jeff Dean [00:33:41]: Because then you paid a thousand picojoules in order to do your one-picojoule multiply.

Shawn Wang [00:33:46]: I have never heard an energy-based analysis of batching.

Jeff Dean [00:33:50]: Yeah. I mean, that's why people batch. Ideally, you'd like to use batch size one, because the latency would be great.

Shawn Wang [00:33:56]: The best latency.

Jeff Dean [00:33:56]: But the energy cost and compute inefficiency you get are quite large. So, yeah.
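As a companion to this back-of-envelope style, here is a small sketch that collects the commonly circulated figures from Dean's latency list together with the ballpark energy numbers from this stretch of the conversation. The latency values are the well-known published approximations, and the energy values are rough conversational figures, not datasheet numbers.

```python
# Raw ingredients for back-of-envelope reasoning. Latency values are the
# commonly circulated approximations from "Latency Numbers Every Programmer
# Should Know"; energy values are ballpark figures from this conversation.

LATENCY_NS = {
    "L1 cache reference":                  0.5,
    "branch mispredict":                   5.0,
    "main memory reference":               100.0,
    "disk seek":                           10e6,
    "read 1 MB sequentially from disk":    20e6,
    "packet CA -> Netherlands -> CA":      150e6,
}

ENERGY_PJ = {
    "one multiply in the matrix unit":     1.0,     # "sub one picojoule"
    "move one weight across the chip":     1000.0,  # SRAM on the far side
}

# The punchline of the exchange above: compute is nearly free next to data
# motion, so each weight you move had better be reused many times.
ratio = (ENERGY_PJ["move one weight across the chip"]
         / ENERGY_PJ["one multiply in the matrix unit"])
print(f"data movement costs ~{ratio:.0f}x one multiply")
```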
Shawn Wang [00:34:04]: Is there a similar trick to the one you pulled with putting everything in memory? Obviously Groq has caused a lot of waves betting very hard on SRAM. I wonder if that's something you already saw with the TPUs, since to serve at your scale you probably saw it coming. What hardware innovations or insights were formed because of what you were seeing there?

Jeff Dean [00:34:33]: Yeah. I mean, TPUs have this nice regular structure of 2D or 3D meshes with a bunch of chips connected, and each one of those has HBM attached. For serving some kinds of models, you pay a much higher cost, in time and latency, bringing things in from HBM than you do bringing them in from SRAM on the chip. So if you have a small enough model, you can do model parallelism, spread it out over lots of chips, and get quite good throughput and latency improvements from doing that. You're now striping your smallish model over, say, 16 or 64 chips, and if you do that and it all fits in SRAM, that can be a big win. So that's not a surprise, but it is a good technique.

Alessio Fanelli [00:35:27]: Yeah. What about the TPU design? How do you decide where the improvements have to go? This is a good example: is there a way to bring the thousand picojoules down to 50? Is it worth designing a new chip to do that? The extreme is when people say you should burn the model onto an ASIC, which is about the most extreme version. How much is worth doing in hardware when things change so quickly? What was the internal discussion? Yeah.

Jeff Dean [00:35:57]: I mean, we have a lot of interaction between the TPU chip design and architecture team and the higher-level modeling experts, because you really want to take advantage of being able to co-design what future TPUs should look like based on where we think the ML research puck is going, in some sense. As a hardware designer for ML in particular, you're trying to design a chip starting today, and that design might take two years before it even lands in a data center. Then it has to have a reasonable lifetime, carrying you another three, four, five years. So you're trying to predict what ML computations people will want to run two to six years out, in a very fast-changing field. And so having people with
interesting ML research ideas, things we think will start to work in that timeframe or will be more important in that timeframe, really enables us to get interesting hardware features put into TPU N plus two, where TPU N is what we have today.

Shawn Wang [00:37:10]: Oh, the cycle time is plus two.

Jeff Dean [00:37:12]: Roughly. Sometimes you can squeeze some changes into N plus one, but bigger changes require the chip design to be earlier in its design process. So whenever we can do that, it's generally good. Sometimes you can put in speculative features that maybe won't cost you much chip area; if one works out, it makes something ten times as fast, and if it doesn't, well, you burned a tiny amount of chip area on it, which is not that big a deal. Sometimes it's a very big change, and we want to be pretty sure it's going to work out, so we'll do lots of careful ML experimentation to show us this is actually the way we want to go. Yeah.

Alessio Fanelli [00:37:58]: Is there a reverse, where you've already committed to a chip design, so you cannot take the model architecture in a certain direction because it doesn't quite fit?

Jeff Dean [00:38:06]: Yeah. You definitely have cases where you adapt the model architecture so that it's efficient on the chips you're going to have for both training and inference of that generation of model. So it goes both ways. Sometimes you can take advantage of, say, lower-precision features coming in a future generation, so you might train at that lower precision even if the current generation doesn't quite do it. Mm.

Shawn Wang [00:38:40]: Yeah. How low can we go in precision? Because people are saying ternary, like...

Jeff Dean [00:38:43]: Yeah, I mean, I'm a big fan of very low precision, because I think it saves you a tremendous amount: it's picojoules per bit that you're transferring, and reducing the number of bits is a really good way to reduce that. I think people have gotten a lot of mileage out of having very low bit-precision weights, but then having scaling factors that apply to a whole bunch of those weights.

Shawn Wang [00:39:15]: Interesting. So low precision, but scaled weights. Huh. Never considered that. Interesting.
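A minimal sketch of what "low-bit weights plus per-block scaling factors" can look like in practice. This is generic blockwise quantization for illustration only; the block size and bit width are arbitrary choices, not details from the episode or from any Google scheme.

```python
import numpy as np

# Generic blockwise quantization: store weights as low-bit integers with one
# float scale per block. Illustrative sketch, not Google's actual scheme.

def quantize_blocks(w, block=32, bits=4):
    """Quantize weights to signed `bits`-bit ints, one scale per block."""
    qmax = 2 ** (bits - 1) - 1                       # e.g. 7 for 4-bit
    w = w.reshape(-1, block)
    scales = np.abs(w).max(axis=1, keepdims=True) / qmax
    q = np.clip(np.round(w / scales), -qmax - 1, qmax).astype(np.int8)
    return q, scales

def dequantize_blocks(q, scales):
    return (q.astype(np.float32) * scales).reshape(-1)

rng = np.random.default_rng(0)
w = rng.normal(scale=0.02, size=4096).astype(np.float32)
q, s = quantize_blocks(w)
err = np.abs(w - dequantize_blocks(q, s)).max()
print(f"max abs reconstruction error: {err:.5f}")
# 4 bits per weight plus one float scale per 32-weight block is roughly
# 5 bits moved per weight instead of 32: the win when cost is pJ per bit.
```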
Shawn Wang: While we're on this topic: the concept of precision at all is weird when we're sampling. At the end of all this, we have these chips that do very good math, and then we just throw a random number generator on top. So there's a movement toward energy-based models and processors. I'm curious, obviously you've thought about it, but what's your commentary?

Jeff Dean [00:39:50]: Yeah. I mean, I think there are a bunch of interesting trends here. Energy-based models are one; diffusion-based models, which don't sequentially decode tokens, are another. And speculative decoding is a way you can get an equivalent, very small...

Shawn Wang [00:40:06]: Draft.

Jeff Dean [00:40:07]: ...batch factor. You predict eight tokens out, which lets you increase the effective batch size of what you're doing by a factor of eight, and then you maybe accept five or six of those tokens. So you get a 5x improvement in the amortization of moving weights into the multipliers to do the prediction for those tokens. These are all really good techniques, and I think it's really good to look at them through the lens of energy (real energy, not energy-based models), and also latency and throughput. Looking at things through that lens guides you toward solutions that will be better at serving larger models, or equivalent-size models more cheaply and with lower latency.
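To see why drafting tokens helps even at batch size one, here is the amortization arithmetic in miniature, reusing the conversation's ballpark picojoule figures and draft/accept counts (illustrative, not measured values).

```python
# Amortization arithmetic for batching and speculative decoding, using the
# ballpark energy figures from the conversation (illustrative, not measured).

MOVE_PJ = 1000.0   # move one weight to the multiplier unit
MULT_PJ = 1.0      # one multiply once the weight is there

def energy_per_useful_multiply(reuse: int) -> float:
    """Energy per multiply when each moved weight is reused `reuse` times."""
    return MOVE_PJ / reuse + MULT_PJ

for batch in (1, 8, 256):
    pj = energy_per_useful_multiply(batch)
    print(f"batch {batch:>3}: ~{pj:7.1f} pJ per multiply")

# Speculative decoding: draft eight tokens, score them in one pass of the
# big model, accept five or six. Weight movement is amortized ~5x even at
# batch size one.
drafted, accepted = 8, 5
print(f"drafting {drafted}, accepting {accepted} -> "
      f"~{accepted}x amortization of each weight move")
```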
Shawn Wang [00:41:03]: Yeah. I think it's appealing intellectually; I haven't seen it really hit the mainstream. But I do think there's some poetry in the idea that we don't have to do a lot of shenanigans if we design it into the hardware fundamentally. Yeah.

Jeff Dean [00:41:23]: I mean, there are also more exotic things, like analog computing substrates as opposed to digital ones. I think those are super interesting because they can potentially be low power, but you often end up wanting to interface them with digital systems, and you lose a lot of the power advantages in the digital-to-analog and analog-to-digital conversions you end up doing at the boundaries and periphery of the system. I still think there's a tremendous distance we can go from where we are today in energy efficiency, with much better, specialized hardware for the models we care about.

Shawn Wang [00:42:05]: Yeah.

Alessio Fanelli [00:42:06]: Any other interesting research ideas you've seen, or maybe things you cannot pursue at Google that you'd be interested in seeing researchers take a stab at? I guess you have a lot of researchers.

Jeff Dean [00:42:21]: Our research portfolio is pretty broad. In terms of research directions, there's a whole bunch of open problems in how you make these models reliable and able to do much longer, more complex tasks that have lots of subtasks. How do you orchestrate, say, one model that's using other models as tools in order to build things that can collectively accomplish much more significant pieces of work than you would ask a single model to do? That's super interesting. And how do you get RL to work for non-verifiable domains? That's a pretty interesting open problem, because it would broaden the capabilities of the models. If we could apply the improvements you're seeing in math and coding to other, less verifiable domains, by coming up with RL techniques that actually enable us to do that effectively, that would really make the models improve quite a lot, I think.

Alessio Fanelli [00:43:26]: I'm curious. When we had Noam Brown on the podcast, he said they already proved you can do it with Deep Research. And you kind of have it with AI Mode, in a way that's not verifiable. I'm curious whether there's a thread you find interesting there. Both are basically information retrieval, so I wonder if the retrieval is the verifiable part that you can score. How would you model that problem?

Jeff Dean [00:43:55]: Yeah. There are ways of having other models evaluate the results of what a first model did, including the retrieval itself. Can you have another model that says: are these things you retrieved relevant? Or can you rate these 2,000 things you retrieved to assess which 50 are the most relevant? Those kinds of techniques are actually quite effective. Sometimes it can even be the same model, just prompted differently to act as a critic rather than as an actual retrieval system. Yeah.

Shawn Wang [00:44:28]: I do think there's that weird cliff where it feels like we've done the easy stuff, and now the next part is super hard and nobody's figured it out. But it always feels like that, every year. Exactly so with this RLVR thing, where everyone's asking: okay, how do we do the next stage, the non-verifiable stuff? And everyone's like: I don't know... LLM judge?

Jeff Dean [00:44:56]: I mean, the nice thing about this field is that there are lots and lots of smart people thinking about creative solutions to the problems we all see. Everyone sees that the models are great at some things, fall down around the edges of those things, and are not as capable as we'd like in those areas. Coming up with good techniques, trying them, and seeing which ones actually make a difference is what the research side of this field is pushing forward, and that's why it's super interesting. If you think about two years ago, we were struggling with GSM8K problems: Fred has two rabbits, he gets three more rabbits, how many rabbits does he have? That's a pretty far cry from the kind of mathematics the models can do now; you're doing IMO and Erdős problems in pure language. That is a really amazing jump in capabilities in a year and a half or so. For other areas, it would be great if we could make that kind of leap. We don't exactly see how to do it for some of them, but we do for others, and we're going to work hard on making that better. Yeah.

Shawn Wang [00:46:13]: Yeah.

Alessio Fanelli [00:46:14]: Like YouTube thumbnail generation. That would be very helpful. We need that. That would be AGI.

Shawn Wang [00:46:20]: It would be, as far as content creators go.

Jeff Dean [00:46:22]: I guess I'm not a YouTube creator, so I don't care that much about that problem, but I gather many people do.

Shawn Wang [00:46:27]: It does matter. People do judge books by their covers, as it turns out.
Just to dwell a bit on the IMO gold: I'm still not over the fact that a year ago we had AlphaProof and AlphaGeometry and all those things, and then this year we were like, screw that, we'll just chuck it into Gemini. What's your reflection? This question about the merger of symbolic systems and LLMs was very much a core belief, and then somewhere along the line people just said: nope, we'll do it all in the LLM.

Jeff Dean [00:47:02]: Yeah. I mean, it makes a lot of sense to me, because humans manipulate symbols, but we probably don't have a symbolic representation in our heads. We have some distributed representation, neural-net-like in some way, of lots of different neurons and activation patterns firing when we see certain things. That's what enables us to reason and plan, to do chains of thought and roll them back: "that approach for solving the problem doesn't seem like it's going to work, I'm going to try this one." In a lot of ways we're emulating what we intuitively think is happening inside real brains with neural-net-based models. So it never made sense to me to have completely separate, discrete, symbolic things and then a completely different way of thinking about them.

Shawn Wang [00:47:59]: Interesting. I mean, maybe it seems obvious to you, but it wasn't obvious to me a year ago. Yeah.

Jeff Dean [00:48:06]: I do think that progression, the IMO effort translating to Lean and using Lean, plus a specialized geometry model, and then this year switching to a single unified model that is roughly the production model with a little more inference budget, is actually quite good, because it shows that the capabilities of the general model have improved dramatically, and now you don't need the specialized model. It's actually very similar to the 2013-to-2016 era of machine learning, when people would train a separate model for each different problem. I want to recognize street signs, so I train a street sign recognition model; I want to do speech recognition, so I have a speech model. I think the era of unified models that do everything is really upon us, and the question is how well those models generalize to new things they've never been asked to do. They're getting better and better.

Shawn Wang [00:49:10]: And you don't need domain experts. I interviewed ETA, who was on that team, and he was like: yeah, I don't know where the IMO competition was held, I don't know the rules of it, I just trained the models. And it's kind of interesting that people with this universal machine learning skill set, given data and enough compute, can tackle almost any task, which is the bitter lesson, I guess. I don't know. Yeah.

Jeff Dean [00:49:39]: I mean, I think general models will win out over specialized ones in most cases.

Shawn Wang [00:49:45]: So I want to push there a bit. I think there's one hole here, which is:
there's this concept of the capacity of a model. Abstractly, a model can only contain the number of bits that it has. And, God knows, Gemini Pro is somewhere between one and ten trillion parameters; we don't know. But take the Gemma models, for example: a lot of people want the open-source local models, and those models carry some knowledge that is not necessary. They can't know everything. You have the luxury that the big model should be capable of everything, but when you're distilling down to the small models, you're memorizing things that are not useful. So how do we extract that? Can we divorce knowledge from reasoning?

Jeff Dean [00:50:38]: Yeah. I mean, I think you do want the model to be most effective at reasoning if it can retrieve things, right? Having the model devote precious parameter space to remembering obscure facts that could be looked up is actually not the best use of that parameter space; you might prefer something more generally useful in more settings than some obscure fact. So that's always a tension. At the same time, you don't want your model to be completely detached from knowing stuff about the world. It's probably useful to know how long the Golden Gate Bridge is, just to have a general sense of how long bridges are. It maybe doesn't need to know the length of some teeny little bridge in a more obscure part of the world, but it does help to have a fair bit of world knowledge, and the bigger your model is, the more you can have. But I do think combining retrieval with reasoning, and making the model really good at doing multiple stages of retrieval and reasoning through the intermediate retrieval results, is going to be a pretty effective way of making the model seem much more capable. Because if you think about, say, a personal Gemini: we're probably not going to train Gemini on my email. We'd rather have a single model that can use retrieval from my email as a tool, have the model reason about it, retrieve from my photos or whatever, make use of that, and have multiple stages of interaction. That makes sense.

Alessio Fanelli [00:52:24]: Do you think the vertical models are an interesting pursuit? When people say, we're building the best healthcare LLM, or the best law LLM, are those kind of short-term stopgaps?

Jeff Dean [00:52:37]: No, I mean, I think vertical models are interesting. You want them to start from a pretty good base model, but then you can view them as enriching the data distribution for that particular vertical domain, for healthcare, say. Or for robotics: we're probably not going to train Gemini on all the robotics data we could train it on, because we want it to have a balanced set of capabilities.
So we'll expose it to some robotics data, but if you're trying to build a really, really good robotics model, you're going to want to start with that base and then train it on more robotics data. Maybe that would hurt its multilingual translation capability but improve its robotics capabilities. We're always making these kinds of trade-offs in the data mix we train the base Gemini models on. We'd love to include data from 200 more languages, and as much data as we have for those languages, but that's going to displace some other capabilities of the model. It won't be as good at, say, Perl programming. It'll still be good at Python programming, because we'll include enough of that, but other long-tail programming languages or coding capabilities may suffer, or multimodal reasoning capabilities may suffer because we didn't expose it to as much data there, while it's really good at multilingual things. So I think some combination of specialized models, maybe more modular models, would be nice: the capability to have those 200 languages, plus this awesome robotics module, plus this awesome healthcare module, all of which can be knitted together to work in concert and called upon in different circumstances. If I have a health-related request, it should enable using the health module in conjunction with the main base model to be even better at those kinds of things. Yeah.

Shawn Wang [00:54:36]: Installable knowledge.

Jeff Dean [00:54:37]: Right.

Shawn Wang [00:54:38]: Just download it as a package.

Jeff Dean [00:54:39]: And some of that installable stuff can come from retrieval, but some of it probably should come from preloaded training on, you know, a hundred billion tokens or a trillion tokens of health data. Yeah.

Shawn Wang [00:54:51]: For listeners, I'll highlight the Gemma 3n paper, where there was a little bit of that, I think.

Alessio Fanelli [00:54:56]: Yeah. I guess the question is: how many billions of tokens do you need to outpace the frontier model improvements? If I have to make this model better at healthcare while the main Gemini model is still improving, do I need 50 billion tokens? Can I do it with a hundred billion? If I need a trillion healthcare tokens, they're probably not out there. I think that's really the question.

Jeff Dean [00:55:21]: Well, I mean, healthcare is a particularly challenging domain. There's a lot of healthcare data that, appropriately, we don't have access to, but there are a lot of healthcare organizations that want to train models on their own data, data that is not public. So I think there are opportunities to, say, partner with a large healthcare organization and train models for their use that are more bespoke, but probably better than a general model trained on public data. Yeah.

Shawn Wang [00:55:58]: Yeah. By the way, somewhat related to the language conversation: I believe one of your favorite examples was that you can put a low-resource language in the context and the model just learns it.
Yeah.

Jeff Dean [00:56:09]: Oh yeah, I think the example we used was Kalamang, which is truly low-resource, because it's spoken by, I think, 120 people in the world, and there's no written text.

Shawn Wang [00:56:20]: So you can just do it that way, just put it in the context. You put the whole data set in the context, right?

Jeff Dean [00:56:27]: If you take a language like Somali, or Ethiopian Amharic, there is a fair bit of text in those languages in the world, and we're probably not putting all of it into the Gemini base training. We put some of it in, but if you put more of it in, you'll improve the capabilities of the models for those languages.

Shawn Wang [00:56:49]: Yeah.
This episode wraps up our Technology Modernization theme with a Siemens perspective that feels very grounded in what factories are actually dealing with right now. Brian Albrecht and Louis Hughes from the Siemens XD team walk through what they are seeing in the field across brownfield and greenfield conversations, why executives keep asking for industrial AI before the foundations are ready, and what it really takes to turn messy plant data into something you can trust for analytics, operations, and eventually AI enabled workflows.A big thread in this conversation is that modern manufacturing is not blocked by ambition, it is blocked by readiness. Everyone wants faster decisions, fewer surprises, and higher uptime, but the path there usually starts with boring work that is not optional. Data transparency across machine, plant, MES, and cloud layers. A clear definition of what real time actually needs to mean for a given use case. And a plan to contextualize and orchestrate data so that AI does not get fed junk inputs. Brian and Louis explain how they approach those early customer conversations, how workshops turn vision into prioritized use cases, and why trust, pilots, and repeatability matter more than flashy demos when you are working in regulated or high consequence environments.If you have been hearing nonstop AI buzz but you are still wrestling with legacy controls, inconsistent tags, documentation that no one can find, and seven layers of security constraints, this episode is for you. We get into practical use cases like AI vision and anomaly detection, LLMs for tribal knowledge and troubleshooting workflows, and the idea of fast versus slow AI, meaning AI that must act during production versus AI that can analyze after the fact.Timestamps00:00 Welcome and why this episode closes the modernization theme02:10 Meet Brian Albrecht and Louis Hughes from the Siemens XD team05:25 Vertical differences across oil and gas, discrete, and process manufacturing07:50 What executives ask for right now beyond AI, factory of the future and data transparency10:50 Brownfield reality and why most modernization work starts with legacy systems12:30 The AI conversation when foundations are missing, meeting customers where they are15:10 Current AI use cases in manufacturing, downtime, throughput, LLMs, and vision18:10 What it means to be AI ready, data silos, contextualization, and orchestration23:50 Fast versus slow AI and why production time decisions are different from analytics25:30 Edge versus cloud architecture, latency, and where the data should live33:40 Cybersecurity, trust, and why perception can lag behind the technology36:50 Hallucinations, guardrails, and why recommendations usually come before automation51:10 Book recommendations, career advice, and future predictions for industrial AIAbout the hostsVlad Romanov is an electrical engineer with an MBA from McGill University and over a decade of experience in manufacturing and industrial automation. 
He has worked across large scale environments including Procter and Gamble, Kraft Heinz, and Post Holdings, and he now leads Joltek, helping manufacturers modernize systems, improve reliability, strengthen IT and OT architecture, and upskill technical teams through practical training and on site enablement.
Dave Griffith is the cohost of Manufacturing Hub and an industrial automation practitioner who focuses on how modern technologies translate into real factory outcomes, from controls and data foundations to scalable implementation strategies.
About the guests
Brian Albrecht started in electrical engineering and spent about a decade in systems integration in Oklahoma City focused on oil and gas, building SCADA, networking, and automation solutions and leading teams delivering real world projects. He now works with Siemens customers on building relationships and delivering solutions that create measurable value.
Louis Hughes has roughly 20 years of manufacturing experience, starting in software development for manufacturing and engineering applications, then moving into solution architecture, services delivery, and experience center leadership. He now leads a smart manufacturing team, bringing a software and systems view into automation conversations focused on solving customer problems, not just deploying tools.
Joltek Services - https://www.joltek.com/services
Contact Joltek - https://www.joltek.com/contact
Referenced in the episode
ProveIt Conference - https://www.proveitconference.com/
Siemens - https://www.siemens.com/
Crossing the Chasm by Geoffrey A. Moore - https://en.wikipedia.org/wiki/Crossing_the_Chasm
Extreme Ownership by Jocko Willink and Leif Babin - https://en.wikipedia.org/wiki/Extreme_Ownership
⭐️⭐️ Click here to listen to Dentcast 154 on the official website ⭐️⭐️ ❌❌❌ In this episode we take on one of the most controversial topics in prosthetic rehabilitation: changing the Vertical Dimension of Occlusion (VDO) and the common beliefs surrounding it. The episode is based on a recent scientific paper and critically examines some entrenched assumptions about increasing VDO.
Thanks to our Partners, Pico Technology and Autel. Watch Full Video Episode. Matt answers listener emails about oscilloscope aliasing: what it is, whether all scopes can do it, and how it can trick you into diagnosing failures that aren't real. Using a "pegboard and golf tees" mental model, he explains how a digital storage oscilloscope samples voltage, stores it in memory, and then reconstructs what you see on-screen. The key takeaway: aliasing isn't magic, it's math, specifically the relationship between sample rate, timebase, and memory buffer. He also explains why some scopes (especially Snap-on) behave differently than Pico-style workflows, and how misunderstanding that screen-to-buffer relationship can create fake-looking "dropouts."
Who This Episode Is For: Anyone using a handheld/PC-based automotive DSO (Pico, Snap-on, Autel, etc.). Techs chasing intermittent cutouts, crank/cam dropouts, injector events, CAN glitches. Anyone who has ever said: "The waveform looked wrong… but the fix didn't fix it."
Key Topics Covered:
What aliasing is (in plain language): the scope fails to accurately reconstruct the waveform you're testing.
Can all oscilloscopes alias? The spicy answer is yes, they all can, especially digital scopes, depending on setup and limitations.
Analog vs. digital (audio analogy): Digital sampling is like digital audio; there are "samples," and reconstruction depends on how well you capture the real signal.
The "pegboard model" for DSO operation: Up/down holes = voltage levels (vertical resolution). Left/right holes = time positions (sample points in memory). The scope measures voltage, then "plants a peg" in memory and connects the dots.
Vertical resolution vs. time performance: 8-bit can look stair-steppy; 12/16-bit improves vertical accuracy. But most real-world failures come from time-domain limitations (sample rate + memory dynamics).
Sample rate vs. buffer size (why scopes "fall apart"): Put too little time on screen → not enough samples to define the signal. Put too much time on screen → the scope rejects/skips samples because the buffer can't hold them all. Either way, the displayed waveform can become fiction.
How aliasing creates "phantom dropouts": Gaps that look like crank sensor dropouts or reluctor issues can send you straight into the diagnostic swamp.
Why Pico changed the game: Early Pico automotive scopes stood out because they brought big memory buffers to real shop problems. Capture longer events accurately, then zoom in for detail.
Snap-on screen/buffer behavior is different (and people get burned): A Snap-on scope often shows a "window" into a buffer (buffer bar flying across). You don't "zoom in like Pico"; you effectively set detail first, capture the event, then zoom out to find it and return to your detail level.
Misunderstanding this is a common cause of "dropouts" that are really aliasing or misuse.
The Big Takeaways: Aliasing can make a good tech chase a bad story. The waveform on-screen is an interpretation, not a photograph. Know your scope's strengths: some are built for speed, some for memory, some for both, but your settings decide your fate. If you're hunting an intermittent, your success depends on matching the expected event speed, the sample rate, the memory depth, and the scope's display/buffer behavior.
Practical "In-the-Bay" Tips: If the trace shows perfectly suspicious gaps, question your timebase, question your effective sample rate, and verify with a different capture strategy (less time on screen, more sample rate, a different scope mode). Don't trust a dropout unless it repeats consistently under the same conditions and you can capture it without stretching the timebase beyond what your scope can support.
Learn...
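Because the whole discussion hinges on the sample-rate, timebase, and buffer relationship, a tiny Python sketch can make the trap concrete. The scope specs and signal frequency below are illustrative assumptions, not numbers from the episode.

```python
# How timebase and buffer depth set a scope's *effective* sample rate, and
# how undersampling manufactures a phantom low-frequency signal (aliasing).
# All numbers are illustrative assumptions, not from the episode.

MAX_RATE = 20e6     # headline sample rate, samples/second (assumed)
BUFFER = 32_000     # memory depth in samples (assumed)
SIGNAL_HZ = 900.0   # the real signal under the probe (assumed)

def effective_rate(screen_seconds: float) -> float:
    # The buffer can only hold BUFFER samples of the on-screen window,
    # so stretching the timebase silently lowers the sample rate.
    return min(MAX_RATE, BUFFER / screen_seconds)

def displayed_hz(rate: float) -> float:
    # Frequency folding: content above rate/2 (Nyquist) aliases into band.
    return abs(SIGNAL_HZ - round(SIGNAL_HZ / rate) * rate)

for screen in (0.001, 1.0, 60.0):
    r = effective_rate(screen)
    print(f"{screen:>6}s on screen -> {r:>12,.0f} Sa/s -> "
          f"900 Hz displays as ~{displayed_hz(r):.0f} Hz")
# At 60 s/screen the effective rate falls to ~533 Sa/s, far below the
# 1,800 Sa/s Nyquist requirement, so the 900 Hz signal "becomes" ~167 Hz.
```

The same arithmetic backs the episode's advice: less time on screen or more memory raises the effective rate, and a dropout you cannot reproduce at a faster timebase deserves suspicion.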
Investor Fuel Real Estate Investing Mastermind - Audio Version
In this episode of the Real Estate Pros podcast, host Michelle Kesil speaks with LaVonne Idlette, who runs a vertically integrated real estate investment firm in Florida. LaVonne shares her journey in the real estate industry, discussing the challenges and successes of building a business that focuses on lending, development, and affordable housing. She emphasizes the importance of belief, adaptability, and networking in achieving business growth and overcoming obstacles. The conversation also touches on the impact of the pandemic on business operations and the significance of community development. Professional Real Estate Investors - How we can help you: Investor Fuel Mastermind: Learn more about the Investor Fuel Mastermind, including 100% deal financing, massive discounts from vendors and sponsors you're already using, our world class community of over 150 members, and SO much more here: http://www.investorfuel.com/apply Investor Machine Marketing Partnership: Are you looking for consistent, high quality lead generation? Investor Machine is America's #1 lead generation service for professional investors. Investor Machine provides true 'white glove' support to help you build the perfect marketing plan, then we'll execute it for you…talking and working together on an ongoing basis to help you hit YOUR goals! Learn more here: http://www.investormachine.com Coaching with Mike Hambright: Interested in 1 on 1 coaching with Mike Hambright? Mike coaches entrepreneurs looking to level up, build coaching or service based businesses (Mike runs multiple 7 and 8 figure a year businesses), build a coaching program, and more. Learn more here: https://investorfuel.com/coachingwithmike Attend a Vacation/Mastermind Retreat with Mike Hambright: Interested in joining a "mini-mastermind" with Mike and his private clients on an upcoming "Retreat", either at locations like Cabo San Lucas, Napa, Park City ski trip, Yellowstone, or even at Mike's East Texas "Big H Ranch"? Learn more here: http://www.investorfuel.com/retreat Property Insurance: Join the largest and most investor friendly property insurance provider in 2 minutes. Free to join, and insure all your flips and rentals within minutes! There is NO easier insurance provider on the planet (turn insurance on or off in 1 minute without talking to anyone!), and there's no 15-30% agent mark up through this platform! Register here: https://myinvestorinsurance.com/ New Real Estate Investors - How we can work together: Investor Fuel Club (Coaching and Deal Partner Community): Looking to kickstart your real estate investing career? Join our one of a kind Coaching Community, Investor Fuel Club, where you'll get trained by some of the best real estate investors in America, and partner with them on deals! You don't need $ for deals…we'll partner with you and hold your hand along the way! Learn More here: http://www.investorfuel.com/club
If you're doing more but feeling stuck, the issue may not be effort—it may be the direction of your growth. In this episode of Healthy Mind, Healthy Life, host Sayan explores why high achievers plateau even as they collect more skills, goals, and credentials. Joined by Ryan Gottfredson, the conversation breaks down vertical development—upgrading your internal “operating system” (nervous system and identity)—so you can lead, decide, and perform with less strain and more impact. This is for professionals and leaders who want real progress without living in constant pressure. About the Guest: Ryan Gottfredson is a leadership coach and researcher focused on vertical development and mindsets. He's the author of Becoming Better: The Groundbreaking Science of Personal Transformation and shares practical tools like meditation and journaling. Key Takeaways: Separate “doing side” growth (skills) from “being side” growth (identity + nervous system). Use the tool-belt/iPad metaphor: add tools vs upgrade the operating system. Notice 4th-gear living: fast pace, high internal RPMs, higher burnout risk. “Shift gears” by letting go of the need to prove, be recognized, or never fail. If you avoid initiative due to fear of failure, more credentials won't fix it—inner work will. Start simple: meditation for regulation and a daily journaling habit for self-awareness. How to Connect With the Guest: https://ryangottfredson.com/ Want to be a guest on Healthy Mind, Healthy Life? DM on PM - Send me a message on PodMatch DM Me Here: https://www.podmatch.com/hostdetailpreview/avik Disclaimer: This video is for educational and informational purposes only. The views expressed are the personal opinions of the guest and do not reflect the views of the host or Healthy Mind By Avik™️. We do not intend to harm, defame, or discredit any person, organization, brand, product, country, or profession mentioned. All third-party media used remain the property of their respective owners and are used under fair use for informational purposes. By watching, you acknowledge and accept this disclaimer. Healthy Mind By Avik™️ is a global platform redefining mental health as a necessity, not a luxury. Born during the pandemic, it's become a sanctuary for healing, growth, and mindful living. Hosted by Avik Chakraborty, storyteller, survivor, and wellness advocate. With over 6000+ episodes and 200K+ global listeners, we unite voices, break stigma, and build a world where every story matters.
TOPIC: Lear Corp PANEL: Ray Scott, Lear Corporation; David Welch, Bloomberg; Gary Vasilash, shinymetalboxes.net; John McElroy, Autoline.tv
Ed, Simon and Harvey discuss everything that matters in the world of real estate marketplaces this week...- CoStar's ongoing investor spat and what the outcome might be- QuintoAndar's big push into Rio de Janeiro- Leboncoin shutters A Vendre A Louer- Rightmove's EBITDA problem
00:00 Introduction and Weather Check
01:48 CoStar's Board and Activist Investors
04:01 The Role of Activist Investors
09:44 CoStar's Strategic Options
12:55 Homes.com and Compass
14:28 Market Trends and AI Impact
21:05 QuintoAndar's Investment in Brazil
25:50 Leboncoin's Strategic Shift
28:53 The Evolution of Brand Management in Real Estate
32:46 Legal Challenges in the Real Estate Market
37:16 Innovations in Real Estate Technology
42:42 The Struggles of Real Estate Agents with Pricing Models
We've got a super cool episode lined up this time! We've got three amazing guests joining us. Luc Besson is the director of The Fifth Element, Lucy, and Léon: The Professional. We're diving deep into his latest movie, Dracula. We discussed the incredible artistry behind the movie, its captivating story, and the memorable characters they created. After that, we're going straight into the interview with Caleb Landry Jones and Zoë Bleu! We discuss the power of love, their experience filming the movie, and the story's deeper meaning. Buckle up for this great conversation and make sure to check out Dracula this Friday! Vertical will release DRACULA in theaters nationwide on February 6th, 2026 Learn more about your ad choices. Visit megaphone.fm/adchoices
Send us a textIn this high-octane episode of Sidecar Sync, Amith and Mallory cover an ambitious trio of AI developments with massive implications for associations. They dive into Kimi K2.5, a Chinese open-source model built for multimodal agent swarms that rival GPT-5.2 at a fraction of the cost. Then, they explore Claude's new domain-specific plugins for Cowork and what it means for associations when Big AI moves into vertical markets like legal and finance. Finally, they unpack Elon Musk's latest megamerger: SpaceX and xAI joining forces to launch AI data centers into orbit. Whether it's AI agents that run teams of themselves or compute infrastructure leaving Earth altogether, this episode challenges assumptions and encourages leaders to rethink what's possible.
This week, we welcomed SAP's Hagen Heubach to discuss the EV battery supply chain, covering risks, production strategies, data, AI, and recycling. We also explored why software is increasingly becoming a crucial differentiator for automakers. Download the episode transcript ===== In this episode, we were joined by SAP's Hagen Heubach to dive into the full lifecycle of EV batteries, from raw material sourcing and plant strategy to quality, second-life use, and recycling. We discussed data, digital twins, industry collaboration, and AI, and examined how in-car software and autonomous driving are reshaping competitive advantage in automotive supply chains. ===== Guest 1: Hagen Heubach, Chief Marketing Officer for Supply Chain Management, SAP Hagen Heubach joined SAP in 2007 and has held numerous leadership positions. Prior to his current role, he led Discrete Industries at SAP, holding end-to-end business and solution responsibility for Automotive, Industrial Manufacturing, High Tech, and Aerospace and Defense. He lived in Japan for several years, driving SAP's business expansion across Asia Pacific. Together with his team, he spearheads product marketing, strategic customer engagements, demand generation, thought leadership, and the commercialization of SAP's supply chain portfolio. Host 1: Richard Howells, SAP Richard Howells has been working in the Supply Chain Management and Manufacturing space for over 30 years. He is responsible for driving thought leadership and awareness of SAP's ERP, Finance, and Supply Chain solutions and is an active writer, podcaster, and thought leader on the topics of supply chain, Industry 4.0, digitization, and sustainability. Host 2: Oyku Ilgar, SAP Oyku Ilgar is a marketer and thought leader specializing in SAP's digital supply chain and ERP solutions since 2017. As a marketer, blogger, and podcaster, she creates engaging content that highlights innovative SAP technologies and explores key topics including business trends, AI, Industry 4.0, and sustainability. She holds dual bachelor's degrees in Finance & Accounting and English Translation, along with a master's degree in Business Administration and Foreign Trade, specializing in marketing. With her background in digital transformation, Oyku communicates technology trends and industry insights to help professionals navigate the evolving business landscape. ===== Show Links: Supply Chain Management: SAP Supply Chain Management SAP Insights: Supply Chain Follow Us on Social Media Hagen Heubach: LinkedIn Richard Howells: LinkedIn Oyku Ilgar: LinkedIn SAP Digital Supply Chain: LinkedIn Please give us a like, share, and subscribe to stay up-to-date on future episodes! ===== Chapters: 00:00:00: Intro 00:01:02: Guest introduction 00:01:18: Challenges of EV battery supply chains 00:04:58: How are manufacturers balancing all of these challenges? 00:10:15: Vertical integration vs outsourcing in battery value chains 00:14:02: Zero-life reliability concept explained 00:16:32: Second-life applications for used EV batteries 00:17:21: Data, digital twins and supply chain orchestration 00:21:22: Sustainability, battery passports and dismantling factories 00:24:27: Future technologies, 1,000 km range and AI in battery lifecycle 00:27:16: What's the future of the supply chain? 00:28:18: Outro
Ed and Simon are joined by Slava Rodriguez, Real Estate Vertical Director at Larixon Classifieds, a company that operates portals across five very diverse markets.They explore market entry strategies, growth metrics, competition, and the role of social media in real estate marketing. The conversation also delves into branding strategies, managing operations across multiple markets, and the potential impact of AI on the real estate landscape.00:00 Introduction to Larixon Classifieds02:56 Market Strategy and Growth07:20 Real Estate Vertical Insights11:17 Competition in Diverse Markets13:57 Market Dynamics and Parallels16:00 Leveraging Social Media Marketing19:54 Brand Building and Marketing Strategies23:52 Managing Multiple Markets27:13 Trends in Classifieds and Future Outlook32:41 Impact of AI on Real Estate40:55 Transitioning from Corporate to Local Markets
Technovation with Peter High (CIO, CTO, CDO, CXO Interviews)
What if the real driver of digital disruption isn't technology, but unit economics? In this episode of Technovation, Peter High speaks with Dan Gill, Chief Product Officer of Carvana, about how disciplined unit economics power one of the most ambitious e-commerce models in retail. Rather than leading with engineering for its own sake, Carvana focuses relentlessly on eliminating friction, capturing profit pools, and reinvesting those economics back into customer value. Key highlights from the episode:
- Vertical integration and competitive advantage
- Deterministic, self-service digital experiences
- Proprietary platforms vs. off-the-shelf tools
- AI-human collaboration at scale
Sandy recently came across an article in Variety that she thinks could spark your creativity for your next screenplay, short film, or vertical series. We also take a moment to revisit the March 2024 episode of 'Don't You Forget About John Hughes,' where Terry and Sandy dive into four inspiring quotes by the legendary screenwriter, producer, and director John Hughes. As one of Hollywood's most successful and influential talents, John Hughes has played a key role in launching the careers of stars like Andrew McCarthy, Molly Ringwald, Anthony Michael Hall, Macaulay Culkin, Judd Nelson, and John Candy.Executive Producer Kristin OvernCreator/Executive Producer Sandy AdomaitisProducer Terry SampsonMusic by Ethan StollerCheck out our fantastic sponsor, Novelium:https://novelium.so
“We need $2 from everyone so James can ship this dang mug.”That one line set the tone. A live stream, a packed chat, and a running joke about an $83.95 shipping fee to send a ceramic coffee mug from California to the UK. People laughed. People gave. And the room felt alive.That small, funny moment reminded everyone why live video works. It's real. It's messy. It's human. And when the community rallies, even a mug gets its moment.Melanie states, “…I'm easing my way back into streaming after a long break. Gear was in totes, cameras unplugged, and more cords than sense. But I missed the conversation — the unscripted moments, the people in the chat, the chance to be human-to-human. If you're thinking about getting back in or just trying to make your creator time count, here's the approach that's working for me: focus on the trifecta, protect the experience while you monetize, and design interactions that build culture, not noise.”The Trifecta: Short Form, Long Form, and LiveThe content trifecta is simple and non-negotiable if you want momentum: short form (discoverability), long form (depth and subscribers), and live (community and connection). Each serves a distinct purpose in the funnel.* Short form — Reels, Shorts, micro-clips. Top-of-funnel discovery. Pull the best two-minute nuggets from longer sessions and publish them where new people hang out.* Long form — Edited, produced episodes that show depth and keep people subscribing. This is where you explain ideas, build trust, and convert casual viewers into fans.* Live — The place to be raw, responsive, and relational. Live is where culture forms; viewers become participants and the audience helps make the show.I believe in doing all three, but you don't have to perfect every channel overnight. Experiment widely, then narrow in once you know where your people are and how they like to engage.Monetization: Respect the ExperienceAds and monetization are part of the creator economy. You should be paid for your work, but consider timing. Ads that interrupt a live conversation frustrate viewers and break the flow.One practical tactic: turn ads off while you're live and switch monetization back on immediately after the stream ends. That keeps the live experience clean and preserves the post-live revenue opportunity.Multi-Aspect Streaming: Vertical vs HorizontalPlatforms are trying to serve both quick-consumption vertical audiences and longer-form horizontal audiences simultaneously. That sounds great in theory, but it creates two different viewer experiences — and therefore two separate comment streams.If you're streaming to both vertical and horizontal feeds, you need a plan for each. Vertical is optimized for quick consumption — think TikTok or Instagram-style attention. Horizontal still wins for long-form conversation and comment engagement.* Vertical: bold visuals, tight framing, quick hooks. Comments can feel sparse and lonely compared to horizontal.* Horizontal: room for overlays, comments, richer production elements, and fuller audience interaction.As a producer, this multiplies configuration work. As a creator, think about where your core community lives and which format serves them best. If you're starting, cast a wider net. If you're established, pick one home base and make it great.Substack and the Newsletter-as-PlatformNewsletters have evolved. Substack in particular is no longer just email — it's becoming a social layer, podcast host, and even a live destination. 
You can push livestreams to Substack, embed videos in posts, and host premium tiers for people who want to comment and interact more deeply.Why this matters:* Control — You own your list and can create gated experiences without building complex tech.* Proof of authenticity — Embedding short, raw video clips inside a written post adds human proof that you are the person behind the content.* Monetization options — Substack takes a cut of paid subscriptions, but it handles payments, tiers, and distribution.Production: Keep It Scrappy, Not CrappyHigh production value helps, but content and energy win. Pat McAfee didn't become a phenomenon by upgrading every camera; he did it by staying authentic, consistent, and building a show people care about.Be scrappy. Just don't be crappy.Practical lighting and camera tips that won't break the bank:* Use a simple three-point setup: key light, soft fill, and a subtle hair or backlight to separate you from a dark background.* Control hot spots. If your forehead or scalp catches too much light, try lowering intensity, diffusing the light, or using a light grid to direct output.* A little mattifying powder or anti-shine product is a creator hack for reducing glare on camera.* Keep background practicals (lamps, RGB bulbs) subtle so the set feels moody without distracting the viewer.Community and Culture: The Real Competitive MoatCommunity isn't just about numbers. It's about culture. The way you moderate comments, which comments you surface, and how you respond shapes the environment people want to return to. Live video is the most powerful place to build culture because it creates back-and-forth connection in real time.Small audiences can be intimate and powerful. Learn names, call people out when appropriate, and reward contribution. If your community grows large, gated or paid rooms are a natural next step for bringing intimacy back.Examples of community strategies* Host a public 30-minute live and then a 20-minute members-only deep dive.* Clip the best live moments into short-form content for discovery, and link back to the longform episode or newsletter post.* Use merch or small gestures (signed items, shoutouts) to reinforce belonging.Simple Checklist to Return to Live (or Start One)* Decide your home base platform: where will the majority of your community experience you?* Map the trifecta: plan one longform episode, three short clips, and one live session per week or month.* Set monetization rules: ads off during live, ads on after; or enable memberships for exclusive interaction.* Optimize minimal production: key light, hair light, subtle background color, and a microphone that picks up voice cleanly.* Capture and clip: use an automatic clipping tool or your recording setup to pull shared short-form assets post-stream.* Schedule an outreach cadence: newsletter, social posts, and short clips to funnel viewers into the live room.Parting ThoughtGetting back into streaming doesn't require a full studio overhaul. Show up, be human, and keep the experience respectful for the people who choose to spend time with you. Start scrappy, iterate quickly, and protect the moments that matter. If you do that consistently, the rest — discovery, subscribers, and revenue — follow as feedback that you're on the right path.Thank you to everyone who tuned into my live video! Join me for my next live video in the app. Get full access to the Digital Collective at digitalcollective.media/subscribe
✔️ Hopium: Bitcoin price, lower means opportunity, Bitcoin should be $1.5 million, Bitcoin settlement squeeze incoming!
✔️ Zoom attack
✔️ Samourai

✔️ Sources:
► Hopium
https://x.com/sminston_with/status/2016253863280701796?s=52&t=CKH2brGypO5fEYTgQ-EFhQ
https://x.com/bitcoinmagazine/status/2016447672148738313?s=52&t=CKH2brGypO5fEYTgQ-EFhQ
https://x.com/david_eng_mba/status/2016322981623206255?s=52&t=CKH2brGypO5fEYTgQ-EFhQ
https://x.com/luke_broyles/status/2016212791036657792?s=52&t=CKH2brGypO5fEYTgQ-EFhQ
► Zoom Attack Update
https://x.com/metamick14/status/2016556122379804764?s=52&t=CKH2brGypO5fEYTgQ-EFhQ
► Cool Stuff
https://x.com/supertestnet/status/2016617203278180775?s=52&t=CKH2brGypO5fEYTgQ-EFhQ
https://github.com/Roasbeef/lnd/blob/017299fe6f3aec3d8c7ece84c383a47da59862f0/docs/forwarding_history_privacy.md
► DONATE TO HELP KEONNE AND BILL
https://www.change.org/p/stand-up-for-freedom-pardon-the-innocent-coders-jailed-for-building-privacy-tools

✔️ Check out our Bitcoin-only sponsors!
► https://archemp.co/
Discover the pinnacle of precision engineering. Our very first product, the bitcoin logo wall clock, is meticulously machined in Maine from a solid block of aerospace-grade aluminum, ensuring unparalleled durability and performance. We don't compromise on quality: no castings, just solid, high-grade material. Our state-of-the-art CNC machining center achieves tolerances of 1/1000th of an inch, guaranteeing a perfect fit and finish every time. Invest in a product built to last, with the exacting standards you deserve.
► Join our Telegram: https://t.me/theplebunderground
#Bitcoin #crypto #cryptocurrency #dailybitcoinnews #memecoins

The information provided by Pleb Underground ("we," "us," or "our") on Youtube.com (the "Site") and our show is for general informational purposes only. All information on the show is provided in good faith; however, we make no representation or warranty of any kind, express or implied, regarding the accuracy, adequacy, validity, reliability, availability, or completeness of any information on the Site. UNDER NO CIRCUMSTANCE SHALL WE HAVE ANY LIABILITY TO YOU FOR ANY LOSS OR DAMAGE OF ANY KIND INCURRED AS A RESULT OF THE USE OF THE SHOW OR RELIANCE ON ANY INFORMATION PROVIDED ON THE SHOW. YOUR USE OF THE SHOW AND YOUR RELIANCE ON ANY INFORMATION ON THE SHOW IS SOLELY AT YOUR OWN RISK.
Lak Lakshmanan had a successful career in Private Equity and Big Tech, but he realized he couldn't just "coach the game" while the rules were changing. He had to get back on the field and play it. We discuss vertical AI, the "foolhardiness" required to start a company, the reality of the AI technology wave, and why sitting on the sidelines is the biggest risk of all.

LinkedIn: https://www.linkedin.com/in/valliappalakshmanan
Generative AI Design Patterns (book): https://amzn.to/45v0xBO
Greetings from Pilda,

It is the coldest January in thirty years. Outside my window, the thermometer is dropping, and I have just finished hauling 150 kg of heating pellets into the bunker. It is the perfect atmosphere for what we need to discuss today.

For years, we have been documenting the slow, bureaucratic rot of the Russian Empire. We called it the “Rotting Cake”: a structure that looked impressive from the outside but was being eaten alive by corruption from within. Well, I have bad news. The cake is gone. The rats have finished it. We have now entered the Warlord Era.

In this episode of The Eastern Border, we are no longer looking at a state. We are looking at a dark, geopolitical anime where the laws of physics and logic have been suspended. The “Vertical of Power” has collapsed into a collection of feudal tribes, each led by a warlord more insane than the last.

The World Boss Wakes Up: While the Kremlin was busy arguing about “traditional values,” the United States decided to play Grand Theft Auto. In the span of 48 hours, they kidnapped the President of Venezuela and announced plans to put a “Golden Dome” over a newly purchased Greenland. The “Multipolar World” is dead. The Sheriff is back in town, and he is drunk.

The Grinch and the Cucumber: Meanwhile, France (yes, France) has started seizing Russian oil tankers. Specifically, a tanker named “The Grinch” (you cannot make this up). The Russian response? Impotent rage on Telegram. Why? Because they can't afford to fight back. The “Cucumber Index” has spiked 30% in three weeks. The economy is eating itself.

The Spiritual Psychosis: But the real horror is in the trenches. We read the diaries of soldiers who are trapped in the “Red Marker Trap,” dying in villages that their commanders have already marked as “captured” on the map. We meet the “Shashlik Commandos,” elite bureaucrats who go to the rear to eat barbecue and steal tractors. And we listen to the “Berserkers” like Yuri Yevich, who are now screaming that the war is actually against “Satanic Space Bears” and that the only solution is… ice axes.

The State is dead. Long live the Tribe.

In this episode:
- The Global Humiliation: From Maduro to the Grinch.
- The Techno-Rot: Why the “Toilet Warlords” can't build drones but can brick your iPhone.
- The Meat Grinder: Strelkov's letter from prison revealing that Putin has killed more men in one village than WWII.
- The Escape: Why the “Good Russians” are finding the doors to the West locked tight.

Happiness is Mandatory.
The Government's new Warm Homes plan has been set up to help people switch to eco-friendly heating systems for their homes, such as air source heat pumps and solar panels. The scheme will provide £15 billion in grants. The five-year plan has been welcomed by the energy industry, but some see it as a missed opportunity. We ask the Rural Services Network what's in it for rural communities.

Vertical farming has been hailed as one of the ways to feed the world's growing population, expected to increase to nearly ten billion by 2050. Crops are grown indoors, under artificial light. This method of farming can use less water and fewer chemicals than outdoor crops, but it does use a lot of energy. We find out why Scotland's Rural College has just opened its own full-scale vertical farm in Edinburgh.

Making sure that a farm is sustainable and productive over the long term, without ruining the environment or damaging wildlife, can be expensive. A young food activist and a farmer explore how this can be done with environmentalists and campaigners for better food networks.

Presenter: Anna Hill
Producer: Rebecca Rooney
AI is hitting entertainment like a sledgehammer, from algorithmic gatekeepers and AI-written scripts to digital actors and entire movies generated from a prompt.

In this episode of TechFirst, host John Koetsier sits down with Larry Namer, founder of E! Entertainment Television and chairman of the World Film Institute, to unpack what AI really means for Hollywood, creators, and the global media economy. Larry explains why AI is best understood as a productivity amplifier rather than a creativity killer, collapsing months of work into hours while freeing creators to focus on what only humans can do. He shares how AI is lowering barriers to entry, enabling underserved niches, and accelerating new formats like vertical drama, interactive storytelling, and global-first content.

The conversation also dives into:
• Why AI-generated actors still lack true human empathy
• How studios and IP owners will be forced to license their content to AI companies
• The future of deepfakes, guardrails, and regulation
• Why market fragmentation isn't a threat but an opportunity
• How China, Korea, and global platforms are shaping what comes next
• Why writers and storytellers may be entering their best era yet

Larry brings decades of perspective from every major media transition (cable, streaming, global expansion) and makes the case that AI is just the next tool in a long line of transformative technologies. If you care about the future of movies, television, creators, and culture, this is a conversation you don't want to miss.
Series: Essentials
Week 4: Make an Impact

Big Idea: Making an impact isn't about equal amounts; it's about equal surrender. Following Jesus expands our capacity, grows our generosity, and calls us to join His mission with our whole lives.

Transformation is not static. When we follow Jesus, our lives should be growing: growing in freedom, love, generosity, and impact. As we walk with Him, God increases our capacity to love and serve others. What once felt exhausting begins to feel joyful, because grace grows as obedience grows.

Jesus confronts fake spirituality that looks holy on the outside but consumes people on the inside. In Luke 20, He warns against leaders who love status, attention, and appearance while exploiting the vulnerable. God is not impressed by public spirituality that lacks private sacrifice. The church is not built on image, influence, or talent; it's built on integrity and sacrifice.

Immediately after this warning, Jesus highlights what real devotion looks like. In Luke 21, He watches people give at the temple treasury. Wealthy individuals give out of abundance, but a poor widow gives two small coins, everything she has. Jesus declares that she gave more than all the others, redefining generosity not by amount, but by sacrifice and trust.

The widow didn't just give money; she gave security. She didn't give comfortably; she gave sacrificially. Making an impact isn't about how much you give; it's about how much of yourself you surrender.

But giving is not the finish line; mission is. God didn't save us to be donors; He saved us to be disciples. Some people give money without joining the mission. Others serve faithfully but withhold generosity. Real surrender shows up somewhere, for all of us. Time, talent, and treasure are all part of a surrendered life.

Jesus' final command wasn't “stay and sit,” but “go and make disciples.” And He didn't send us alone; He promised His presence with us always. The mission comes with power and promise.

Vertical exists because people matter to God. And if people matter to God, they matter to us. We don't live for comfort; we live to make an impact. We don't just fund the mission; we join it.
In this episode, we dive deep into the subtle yet powerful world of energetic receptors: those invisible gateways that allow your soul, body, and spiritual support teams to receive light, guidance, energy, and even physical nutrients. When blocked, these receptors can lead to physical fatigue, emotional imbalance, or a sense of disconnection from your essence.

We'll go through a guided 2-step energetic healing protocol designed to clean and repair your energetic receptors across all dimensions. Whether you are struggling with nutrient absorption, low energy, or simply feel off-balance, this episode offers a high-vibrational, restorative practice to bring your entire system back into alignment.

Main Topics:
- Signs of receptor blockages
- Impact of energetic flow on physical and spiritual health
- Importance of multidimensional healing
- Vertical alignment and integration
- Energetic resilience and clarity
- Connection to the light of your essence

This is a segment from Aleya's coaching sessions. To join her live online coaching sessions, click on the link below:
https://www.aleyadao.com/catalog/products/Live-Coaching-Sessions/721/

Get a free month of the Cups of Consciousness meditations at https://www.7cupsofconsciousness.com/

Follow along on social media for more insights and updates!
A big thank you to FeedSpot, as we were selected as one of the TOP 35 High School Coaching Podcasts.
https://podcast.feedspot.com/high_school_coach_podcasts/

Join Us:
Website: www.coachsedge.coach
Email: contact@cramerbasketball.com
Camps: www.Cramerbasketball.com
Online Training: https://cramerbasketball.mypthub.net/3/p/133059
Twitter.com/coachsedge1
Twitter.com/cramerbball
Facebook.com/cramerbasketball
Youtube.com/cramerbasketball
Instagram.com/cramer_basketball

Basketball coach, basketball podcast, basketball strategy, player development, zone offense, zone defense, pressing, pressure defense, program building, team defense, pack line defense, baseline defense, 1-3-1 defense, basketball united, slapping glass, coaching tips, teach hoops, how to coach basketball, youth basketball, basketball camps, basketball immersion training, basketball shooting tips, basketball conditioning, shooting drills, ball handling drills, passing drills, basketball drills, basketball workouts, open gyms, shootouts, scrimmages, man to man defense, basketball officials and deep dive refs, sports psychology, shot selection, dribble drive offense, shooting coach, free throws, athletic development, no middle defense
Welcome to The Hangar Z Podcast, brought to you by Vertical HeliCASTS, in partnership with Vertical Valor Magazine.

In this two-part series, we go international and speak with Hangar Z co-hosts and international instructors. The panel includes Harald Brink, chief tactical flight officer from the Dutch national police; Lt. Clay Lacey from the Texas Department of Public Safety; and Jon Duke, a U.K. military aviation veteran and contributor to Vertical and Vertical Valor magazines. This conversation goes beyond borders to examine the common ground and critical differences in air support operations between the U.S. and Europe, from suspect behavior that looks the same in Dallas as it does in The Hague to the training that makes agency operations effective and efficient.

We talk about the intense task saturation of the tactical flight officer (a job we argue is the most difficult in all of law enforcement), the crucial safety threshold of simulator training, and the tactical impact of flying single-engine helicopters versus the big twins.

Check out the article "Aerial Pursuit Across Continents" by Jon Duke, who joins us to help cross-pollinate these incredible conversations into print. You can find the article in the Fall issue of Vertical Valor magazine, and visit VerticalMag.com to read more about this collaboration, which came to life alongside this recording!

Thank you to our sponsors Bell, CENTUM and Metro Aviation.
Claude isn't just the hottest name in AI right now; it's quietly ushering in a wave of ultra-personalized, vibe-coded software that could transform how we build and use tech in 2026. Find out why coders and normies alike are obsessed and what's coming next.

- Claude Cowork Exfiltrates Files
- Behind the Curtain: The AI future has arrived
- Our Algorithmic Grey-Beige World
- Musk wants up to $134B in OpenAI lawsuit, despite $700B fortune
- Elon Musk's Grok 'Undressing' Problem Isn't Fixed
- I asked Grok's AI to undress me after X's new limits. It's still easy on the app.
- Senate passes Defiance Act for a second time to address Grok deepfakes
- Instagram says it fixed the issue that sent password reset emails
- Instagram AI Influencers Are Defaming Celebrities With Sex Scandals
- French Court Orders Popular VPNs to Block More Pirate Sites, Despite Opposition
- Nearly 5 Million Accounts Removed Under Australia's New Social Media Ban
- TikTok has quietly launched a new micro drama app called PineDrama as the category picks up steam in the US
- Creator income inequality is rising as top influencers rake in big paydays from brands
- Managers on alert for "launch fever" as pressure builds for NASA's Moon mission
- Senate passes minibus bill funding NASA, rejecting Trump's proposed cuts
- Pentagon Device Linked To Havana Syndrome
- Zuck#: A programming language for connecting the world. And harvesting it
- Amazon is buying copper harvested by bacteria for its data centers
- 'Are You Dead?' is now the top paid App Store app in China
- Danish dev delights kid by turning floppy drive into easy TV remote
- Cinemark will fill almost anything with popcorn this weekend—but there's a catch
- Happy Birthday, Wikipedia: We need you now more than ever
- Pluralistic: The world needs an Ireland for disenshittification (17 Jan 2026)

Host: Leo Laporte
Guests: Harper Reed and Abrar Al-Heeti

Download or subscribe to This Week in Tech at https://twit.tv/shows/this-week-in-tech

Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit

Sponsors:
ZipRecruiter.com/twit
redis.io
zscaler.com/security
monarch.com with code TWIT
Welp. That was wild.
Welcome to The Hangar Z Podcast, brought to you by Vertical HeliCASTS, in partnership with Vertical Valor Magazine.

In this two-part series, we go international and speak with Hangar Z co-hosts and international instructors. The panel includes Harald Brink, chief tactical flight officer from the Dutch national police; Lt. Clay Lacey from the Texas Department of Public Safety; and Jon Duke, a U.K. military aviation veteran and contributor to Vertical and Vertical Valor magazines. This conversation goes beyond borders to examine the common ground and critical differences in air support operations between the U.S. and Europe, from suspect behavior that looks the same in Dallas as it does in The Hague to the training that makes agency operations effective and efficient.

We talk about the intense task saturation of the tactical flight officer (a job we argue is the most difficult in all of law enforcement), the crucial safety threshold of simulator training, and the tactical impact of flying single-engine helicopters versus the big twins.

Check out the article "Aerial Pursuit Across Continents" by Jon Duke, who joins us to help cross-pollinate these incredible conversations into print. You can find the article in the Fall issue of Vertical Valor magazine, and visit VerticalMag.com to read more about this collaboration, which came to life alongside this recording!

Thank you to our sponsors Bell, CENTUM and Precision Aviation Group.
We have merch now and you can buy it HERE: https://artistsonartistsonartistsonartists.com/shop

Move over, soap operas, there's a new hot format in town! That's right, vertical microdramas, the movies broken into bite-sized video clips for those who want romance and over-the-top drama on the toilet or while waiting in the school pickup line, are here! This week, we sat down with the creatives behind one of the industry's big names, Shortie for Life, and heard their takes on the industry's evolution, their favorite tropes, and the unique challenges and joys of this format. So get back on your phone, strap in for 120 episodes, and listen in!

This episode was filmed in the beautiful Dynasty Typewriter Theater and tech-produced by Samuel Curtis. For live shows and events, you can find out more at dynastytypewriter.com.

To learn more about the BTS of this episode and to find a world of challenges, games, inside scoop, and the Artists being themselves, subscribe to our Patreon! You won't be disappointed with what you find. Check out patreon.com/aoaoaoapod

Artists on Artists on Artists on Artists is an improvised Hollywood roundtable podcast by Kylie Brakeman, Jeremy Culhane, Angela Giarratana, and Patrick McDonald. Produced by Laservision Productions. Music by Gabriel Ponton. Edited by Conner McCabe. Thumbnail art by Josh Fleury.

Hollywood's talking. Make sure you're listening. Subscribe to us on Apple Podcasts, Spotify, and YouTube! Please rate us five stars!