We're going back to the Moon. The planned March 2026 launch of Artemis II will be the first crewed mission to the Moon since 1972. Historic as it is, it isn't the only lunar event creating a stir at NASA. Two seismometers are to be delivered to Schrödinger Crater on a mission called the Farside Seismic Suite, where the instruments will measure moonquakes and record the possible impact of asteroid 2024 YR4 on the lunar surface. Meanwhile, studies of the Sun are heating up. The PUNCH mission, a four-satellite constellation that will image the Sun's corona and the solar wind, may help us better understand what drives solar storms and how we can protect Earth from their energetic blasts. Guests: Eugene Cernan – Apollo 17 astronaut; Harrison "Jack" Schmitt – geologist and Apollo 17 astronaut; Andrew Rivkin – planetary astronomer at the Johns Hopkins University Applied Physics Laboratory; Ceri Nunn – lunar seismologist and planetary scientist, NASA's Jet Propulsion Laboratory; Ryan French – solar physicist at the Laboratory for Atmospheric & Space Physics in Boulder, Colorado, and author of “Space Hazards: Asteroids, Solar Flares and Cosmic Threats”; Craig DeForest – heliophysicist, Southwest Research Institute, principal investigator on NASA's PUNCH mission. Featuring music by Dewey Dellay and Jun Miyake. Big Picture Science is part of the Airwave Media podcast network. Please contact advertising@airwavemedia.com to inquire about advertising on Big Picture Science. You can get early access to ad-free versions of every episode by joining us on Patreon. Thanks for your support! Learn more about your ad choices. Visit megaphone.fm/adchoices
Dante Lauretta explains how after a 2007 rejection he refined the science objectives, coining the name OSIRIS-REx, then assumed leadership after Mike Drake's passing and guided the team through a critical 2014 confirmation review to secure NASA approval.
A new crew of four astronauts has arrived at the International Space Station for an eight-month science mission. They'll study everything from bacteria to plants, all while helping NASA prepare for future trips to the Moon and Mars. Meanwhile, SpaceX successfully launched 24 new Starlink satellites on Feb. 14 from California. The Falcon 9 rocket's first stage landed safely on a drone ship in the Pacific Ocean. Investigators have found DNA at Nancy Guthrie's property that does not belong to her or anyone close to her, as the search for her enters its third week. President Donald Trump and Israel's Prime Minister Benjamin Netanyahu agreed to ramp up pressure on Iran, targeting oil exports to China. Meanwhile, the U.S. military is preparing for possible weeks-long operations. U.S. investors are suing the South Korean government over one of its leading e-commerce platforms. This comes as an international controversy escalates. An Italian ice dance couple is enjoying their final Olympic Valentine's Day. Details on their love story that began more than 16 years ago.
Pete Cherry VK2LP, the Returning Officer of the WIA. - John VK4JPM, President of the Darling Downs Radio Club, accepts the HamChallenge. - BBC launches a shortwave radio service for listeners in Iran. - NASA no longer planning a February launch of Artemis 2. - Bush to Beach with Alan VK2COD.
An overview of NASA's updated technology policies. This instructional slide deck is delivered through the fictional persona of "Bob The Cyber-Guy," a friendly digital expert depicted in an accompanying illustration. The primary focus is the historic decision to allow astronauts to bring personal mobile devices on high-profile lunar and orbital missions. Key sections of the proposal examine the psychological benefits of staying connected to family and the technical challenges posed by battery safety in space. Ultimately, the materials demonstrate how integrating consumer electronics helps modernize space travel and improve the daily lives of crews. This was created with NotebookLM, based on information I created, in conjunction with Gemini AI. Watch my video on this topic at: https://youtu.be/S5p3hmr5Vgk
Bob Zimmerman covers ESA's fast-tracked Apophis asteroid mission, a commercial attempt to rescue a NASA telescope, and the contrasting regulatory environments of the UK and New Zealand for space launches.
Watch the YouTube version of this episode HERE. Are you a busy law firm owner who doesn't have much time to think about the success of your business? In this episode of the Maximum Lawyer Podcast, Tyson explores the transformative power of setting aside dedicated time for deep thinking and proactive problem-solving within law firms. Drawing inspiration from organizations like NASA and SpaceX, he encourages law firm leaders to regularly schedule uninterrupted time to strategize, test, and implement solutions. Prioritizing time to think as a law firm owner is very important to ensure you run your business successfully. Tyson shares some insights on why setting time aside to think is challenging but important. Most of the time, people are multitasking and trying to get multiple things done in a short amount of time. Though this might be your reality, it is important as a law firm owner to put time aside to think about how things are going for your firm, especially if you want to make your firm better. You need to think about how that will happen. If you take dedicated time to think about it, you can bring it to your team to get ideas flowing and start working towards that idea or goal. Thinking time needs to be scheduled and communicated to your team in order for it to work. Figure out the time of day when you are sharpest. Maybe it's a 1-hour block in the morning, a quick 20-minute block in the early afternoon, or right before bed. Decide what time works best for thinking and put it in your calendar. It is crucial to communicate this thinking time to your team so they don't disrupt it and you can focus. Also, any ideas that come from this dedicated thinking time should be shared with your team so they know what you are expecting of them. Listen in to learn more! 2:26 The Challenge of Prioritizing Thinking Time | 5:45 Proactive Problem-Solving vs. Reactive Management | 13:42 Scheduling and Communicating Thinking Time | 16:43 Common Traits of Successful Law Firms | 18:51 The Power of Focused, Uninterrupted Problem Solving. Tune in to today's episode and check out the full show notes here.
A compilation of Astrum's best content investigating the search for water across the cosmos. We explore NASA's missions hunting for ice in the Moon's deepest craters, and dive into the liquid oceans under the surface of Saturn's icy moons. Find out why NASA wants to explore the deepest oceans on Earth, and where the hunt for life's most vital resource will take us next. ▀▀▀▀▀▀ Astrum's newsletter has launched! Want to know what's happening in space? Sign up here: https://astrumspace.kit.com A huge thanks to our Patreons who help make these videos possible. Sign-up here: https://bit.ly/4aiJZNF
Putin coup fears grow as he activates 'kill switch' on security forces
Iran's attempt to intimidate Trump backfired
Something bizarre is unfolding on Jupiter and it's no longer acting like a planet
Homeland Security filmed a UFO over Puerto Rico - then it hit the ocean
SpaceX and NASA launch astronauts to relieve bare-bones crew at ISS
Chinese rocket falls on their own city!
Navy fires commanding officer of destroyer USS Mason
Gliese 710: The Passing Star That Will Stir the Solar System
Morning brief: Trump signals Iran deal, CIA targets PLA, Ukraine disrupts Russian Starlink
US troops blasted through steel doors 'like it was like papier-mache' to snatch Maduro, Trump says
Japan angers China by seizing fishing boat
Doomsday fish washes ashore in California, is a massive earthquake coming?
Why this asteroid feels wrong and scientists say something is off
Russia warns of military response if Greenland becomes US base
This rocket could reach Mars in 10 days
Venezuela oil sales top $1 billion, funds won't go to Qatar account anymore, Energy Secretary says
Ilia Malinin Olympics catastrophe: 'Quad God' falls twice, doesn't medal
Russian SA-17 missile makes mid-air U-turn and destroys its own launcher
Trump prepares imminent military strike in second country within a month
Russia just laid out its Ukraine war endgame — here's what Moscow actually wants
Send a text. For the first time since 1972, humans are leaving Earth orbit and heading back to the Moon. Artemis II isn't about landing. It's about proving we're ready to go back to deep space. In this episode, Wayne breaks down: • What Artemis II is really designed to test • Why this mission is fundamentally different from Apollo • The international partnerships shaping the Artemis program • How Artemis II clears the path to Artemis III and a return to the lunar surface. Join us as we explore what Artemis II means for the future of NASA, international space exploration, and humanity's next giant leap. Don't forget to like and subscribe; your support helps bring astronomy to more people every month. Contact: AstroGuyPodcast@gmail.com Text/Voicemail: (973) 404-0380 Links: Feel free to buy us a cup of coffee or two! We really appreciate it! https://tinyurl.com/AstroGuyCoffee Our Facebook group page: https://www.facebook.com/groups/astro... Affiliate Links High Point Scientific: https://www.highpointscientific.com/?... Amazon: https://amzn.to/4gFQmOG Audio Credits: Hymn to the Dawn By Scott Buckley Under the Sun By Keys of Moon Adrift Among Infinite Stars By Scott Buckley www.scottbuckley.com.au Music promoted by https://www.chosic.com/free-music/all/ Creative Commons CC BY 4.0 Creative Commons CC BY 3.0 https://creativecommons.org/licenses/...
Investigators in the Nancy Guthrie investigation zero in on a key detail, the Texas Supreme Court weighs a case regarding gender surgeries that could send ripples nationwide, and SpaceX sends another crew to the ISS. Get the facts first with Evening Wire. - - - Ep. 2632 - - - Wake up with new Morning Wire merch: https://bit.ly/4lIubt3 - - - Today's Sponsor: Lean - Get 20% off when you enter code WIRE at https://TakeLean.com - - - Privacy Policy: https://www.dailywire.com/privacy Learn more about your ad choices. Visit podcastchoices.com/adchoices
Dr. Edwin Krupp (Griffith Observatory) joins Conway to talk space news as NASA begins a practice countdown for its first crewed “moonshot” in more than 50 years. A big local milestone: Santa Ana River Trail Phase 3 officially opens with a ribbon cutting after a seven-year build. Then it’s a relatable social debate — are you really okay dining alone, or does it still feel weird? And a bizarre story to close: “coffee shop or strip club?” — a bikini club tied to a crackdown that led to 17 arrests. See omnystudio.com/listener for privacy information.
by UFO History Buff & Author, Charles Lear. In 2019, U.S. Navy pilot Lt. Ryan Graves began speaking publicly about regular encounters with UFOs by flight teams beginning in 2013. On July 26, 2023, he testified before Congress and said that on one occasion, two jets were forced to make evasive maneuvers to avoid a collision with an object he described as a clear sphere with a black cube inside. By the time of the hearing, the All-domain Anomaly Resolution Office, headed by Dr. Sean Kirkpatrick, had looked into these types of reports, and in May 2023, Kirkpatrick informed NASA's UFO advisory council that AARO had about 800 instances of “metallic orb” UFOs. This is according to a January 24, 2024, Science Times article by Caleb White headlined “Cube in a Sphere UAP Could Be ‘Aliens' or ‘Next Generation' Spherical Drones, Pentagon Former UFO Chief Says.” Read more → *Note: audioblogs are now a cloned AI version of Martin's voice.
This episode's got a mix of current events and inspiring stories. We're covering the latest news, including a search for a missing person in Arizona, a NASA launch, and a Boston Marathon runner's incredible story. We're also talking about a Patriots player facing charges, the Olympic Games, and a Harvard event honoring an actress. Plus, we're sharing updates on a movie release, NBA All-Star Weekend, and a concert at Fenway Park. It's a packed episode with something for everyone.See omnystudio.com/listener for privacy information.
Send me a DM here (it doesn't let me respond), OR email me: imagineabetterworld2020@gmail.com This interview is a replay of a 2-part series I did with survivor Cali Shai Bergandi back in 2021. All the information is more relevant today than ever and deserves a playback and re-listen. LET'S BREAK THE INTERNET! This story is going to twist and turn and bend your mind in ways you won't be expecting, and you won't see the world the same after listening to Part 1 of this series featuring elite child sex trafficking, NASA, SRA, occult survivor, whistleblower, and SO much more, Cali Shai Bergandi. This is her first time speaking publicly about her life experiences and I encourage you all to give her your full attention as you take in what she is exposing... Cali's story is so complex. So complex that it was hard to even know where to begin. So, we started at the beginning and are going to compile a few episodes for you to take in the enormity of her personal experiences, allegations, name drops, and information she is exposing. Born into a high-level occultic family, Cali was sold to the elite by her birth mother before she was even born as part of the breeder programs that are a feature of satanic cults, and her father functioned as a satanic serial killer in the dark and as a charming businessman working in the fruit business by day. And this is just the beginning of the enormous rabbit hole Cali is going to take us down over the next couple of weeks... In this episode, we discuss Cali's 'upbringing' (aka: abuse), and we also dive into some experiences that we'll be covering more in depth in future episodes, including personal experiences with Ep-stein, NASA child experiments, The Royal Family, MK ULTRA, being trafficked to sitting Presidents of the past and present, the FL fruit industry and political landscape (including the trafficking system), and SO much more than I can fit into this caption. I really encourage you to stop what you're doing and listen. This episode and Cali's words will undoubtedly awaken you to things you have not yet heard about, connect dots and connections that will bring 'Aha!' moments, take your breath away and leave you speechless (which you can see happen to me many times during this recording), and inspire you all at once to DO something now that you know what you know. Cali is a truther and her story is important. It would mean the world to us if you could help us get this story out by sharing far and wide, commenting, subscribing, and 'liking' this video. It's time to make survivors the new MSM and look to them as authorities on answering the many questions we have about the world so we can work together to give survivors like Cali the justice they deserve and create a better future for the next generations of children. CONNECT WITH EMMA: YouTube: https://www.youtube.com/@imaginationpodcastofficial Rumble: https://rumble.com/c/TheImaginationPodcast EMAIL: imagineabetterworld2020@gmail.com OR standbysurvivors@protonmail.com My Substack: https://emmakatherine.substack.com/ BUY ME A COFFEE: https://www.buymeacoffee.com/theimagination VENMO: @emmapreneur CASHAPP: $EmmaKatherine1204 All links: https://direct.me/theimaginationpodcast Support the show
Send me a DM here (it doesn't let me respond), OR email me: imagineabetterworld2020@gmail.com Part 2 of an interview series I did with Cali Shai Bergandi back in 2021 - this is a replay episode, as the information she was whistleblowing 5 years ago is more relevant than ever... Cali is back again for Part 2 as promised! I asked you all to break the internet with Cali's story... and you did just that, so I hope you enjoy this episode just as much! Cali is a survivor of elite child sex trafficking, occultism, and SRA, and was involved in many government operations and MK Ul-tra experiments. She bravely shares her childhood experiences with us as we take a deeper dive into her father's mysterious death and the loss of her son in the first part of the show, and evolve the conversation into more details surrounding Ep-stein, MK Ul-tra, Disney, The Royal Family (Princess Diana, anyone?) and so much more... Cali is a truther and her story is important. It would mean the world to us if you could help us get this story out by sharing far and wide, commenting, subscribing, and 'liking' this video. It's time to make survivors the new MSM and look to them as authorities on answering the many questions we have about the world so we can work together to give survivors like Cali the justice they deserve and create a better future for the next generations of children. *The views, beliefs, and opinions expressed in this podcast are not necessarily the views, beliefs, or opinions of the host or company. This platform exists to elevate voices of survivors and to have hard, unbiased, and unconventional conversations. CONNECT WITH EMMA: YouTube: https://www.youtube.com/@imaginationpodcastofficial Rumble: https://rumble.com/c/TheImaginationPodcast EMAIL: imagineabetterworld2020@gmail.com OR standbysurvivors@protonmail.com My Substack: https://emmakatherine.substack.com/ BUY ME A COFFEE: https://www.buymeacoffee.com/theimagination VENMO: @emmapreneur CASHAPP: $EmmaKatherine1204 All links: https://direct.me/theimaginationpodcast Support the show
-SpaceX blasts off from Cape Canaveral, sending four NASA astronauts soaring toward the International Space Station. -Georgia Rep. Rich McCormick slams Democrats for pushing a DHS funding shutdown. -Alabama Sen. Katie Boyd Britt delivers a forceful message on the Senate floor. -Sen. Ted Cruz torches Democrats and the liberal media over voter ID and election integrity narratives. -NEWSMAX's Carl Higbie reacts to Minnesota AG Keith Ellison's Senate testimony. -The FBI raises the reward for information on Nancy Guthrie's disappearance to $100,000. Today's podcast is sponsored by : NOBLE GOLD : With precious metals hitting all-time highs and economic uncertainty everywhere you look, this is the time to educate yourself. Download Noble Gold's free Wealth Protection Kit at http://NobleGoldInvestments.com/NEWSMAX Listen to Newsmax LIVE and see our entire podcast lineup at http://Newsmax.com/Listen Make the switch to NEWSMAX today! Get your 15 day free trial of NEWSMAX+ at http://NewsmaxPlus.com Looking for NEWSMAX caps, tees, mugs & more? Check out the Newsmax merchandise shop at : http://nws.mx/shop Follow NEWSMAX on Social Media: -Facebook: http://nws.mx/FB -X/Twitter: http://nws.mx/twitter -Instagram: http://nws.mx/IG -YouTube: https://youtube.com/NewsmaxTV -Rumble: https://rumble.com/c/NewsmaxTV -TRUTH Social: https://truthsocial.com/@NEWSMAX -GETTR: https://gettr.com/user/newsmax -Threads: http://threads.net/@NEWSMAX -Telegram: http://t.me/newsmax -BlueSky: https://bsky.app/profile/newsmax.com -Parler: http://app.parler.com/newsmax Learn more about your ad choices. Visit megaphone.fm/adchoices
A SpaceX Falcon 9 rocket launched NASA's SpaceX Crew 12 to the International Space Station (ISS). The crew are expected to dock on Valentine's Day. Arianespace successfully launched 32 Amazon Leo satellites from Europe's Spaceport in French Guiana. NASA and Vast have signed an order for the sixth private astronaut mission to the ISS, and more. Remember to leave us a 5-star rating and review in your favorite podcast app. Be sure to follow T-Minus on LinkedIn and Instagram. T-Minus Guest Our guest today is Greg Gillinger, SVP for Strategy & Development, Integrity ISR. Elysia Segal brings us the Space Traffic Report from NASASpaceflight.com. Selected Reading NASA's SpaceX Crew-12 Launches to International Space Station Arianespace successfully launches 32 Amazon Leo satellites with the first Ariane 64 NASA Selects Vast for Sixth Private Mission to Space Station Axiom Space Secures $350M in Financing to Accelerate Space Station, Spacesuit Development NRO Advances Multi-Phenomenology Remote Sensing Solutions Space Systems Command- Special Delivery: Valentine's Day eCards! Share your feedback. What do you think about T-Minus Space Daily? Please take a few minutes to share your thoughts with us by completing our brief listener survey. Thank you for helping us continue to improve our show. Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here's our media kit. Contact us at space@n2k.com to request more info. Want to join us for an interview? Please send your pitch to space-editor@n2k.com and include your name, affiliation, and topic proposal. T-Minus is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices
A newborn delivered after a pregnant woman is shot and killed in Flint has died, just two days after his premature birth following his mom’s death. The ex-wife of a NASA astronaut is sentenced to federal prison after admitting she lied to authorities by accusing her former spouse of committing what was once described as the first crime in space. Drew Nelson reports.See omnystudio.com/listener for privacy information.
The Space Show Presents Steve Wolfe, Tyler Bender, & The Beyond Earth Institute, Tuesday, Feb. 10, 2026Quick Summary:This Space Show program focused on promoting the upcoming Beyond Earth Symposium, scheduled for February 24-25 in Washington, D.C., which will explore creating a clear pathway to a space town and discuss Artemis program developments. The discussion covered NASA's authorization bill, commercial space station development, and the need for infrastructure to support a sustainable lunar presence. Key speakers included Steve Wolfe and Tyler Bender from Beyond Earth, who explained their organization's focus on policy and strategic thinking for human expansion into space. The conversation also touched on Jared Isaacman's leadership at NASA, the potential impact of China's space program, and the importance of developing cislunar space infrastructure. The symposium will feature approximately 50 speakers and include meals in the registration package, with a 30% discount available for attendees using the promo code BES30.Detailed Summary:Our program focused on the upcoming Beyond Earth Symposium, which will take place in Washington, D.C., at the Law School for American University from February 24th to 25th. Tyler Bender, the space policy industry analyst for Beyond Earth, introduced the symposium's theme of creating a clear pathway to a space town, discussing the evolution from space habitats to more permanent lunar settlements. Steve Wolfe, president and Co-founder of Beyond Earth, elaborated on the symposium's speakers, including George Whitesides, and highlighted the importance of the NASA authorization bill amendment supporting human expansion into space. The discussion also touched on the challenges of organizing a diverse group of speakers from different regions and the need for policy discussions on advancing a human space migration agenda.The symposium will feature discussions on lunar exploration, focusing on sustainable lunar presence rather than the race to be the first to return to the moon. Steve mentioned that the event will include audience Q&A sessions and panels led by experts who will explore lunar development plans and compare different lander systems. Tyler confirmed that meals are included in the symposium package. Space Show Wisdom Team participant Dallas emphasized the importance of the SpaceX Starship for establishing a lunar community due to its larger capacity compared to the Blue Origin HLS. David raised a question about the development of cislunar space, which Tyler and Steve noted would be addressed in the first panel.The Beyond Earth Institute, a non-profit think tank, aims to provide thoughtful policy and strategic guidance for human space exploration, focusing on creating permanent communities on the Moon, Mars, and beyond. Steve explained that while the Institute is policy-focused, it also considers technology and economic aspects, such as funding mechanisms and commercial development, to support space migration. He mentioned that the Institute has developed papers on financing options and has been advocating for a large-scale public-private partnership lunar research and development facility capable of housing up to 50 people.Wisdom Team member Ajay shared insights from his recent op-ed in the Space Review about lunar cargo transportation, highlighting the need for infrastructure development by 2028 and the limitations of current launch systems like Falcon Heavy and Starship. 
The group discussed the importance of focusing on infrastructure elements such as power, communications, navigation, and prospecting for building a lunar community, with Steve noting the recent commitment to a lunar space station. David inquired about trending shifts in congressional attitudes towards space policy, prompting Steve to reflect on the potential for policy to align with rhetoric and the support of constituents. The group discussed the increasing congressional interest in returning to the Moon, driven by concerns about China's potential to surpass the US in lunar presence. They noted a shift towards commercial space activities, with companies like SpaceX and Blue Origin making vocal commitments to lunar missions. Dallas shared insights from an upcoming AIAA paper series on lunar sustainability, highlighting the importance of ISRU (In Situ Resource Utilization) and the need for practical applications rather than experimental tech demos. The conversation also touched on the process of selecting speakers for conferences, with Steve explaining their leadership council and working groups approach. Space Show program participants discussed funding challenges for mining robots, with Dallas explaining that their development was funded by mining companies but now they need to generate revenue through product sales. Steve clarified that the Beyond Earth Symposium is primarily a forum for discussion and networking rather than a workshop with specific outputs, though they have provided advice to the White House in the past. Ajay shared that he had received a call from Senator Rick Scott's office regarding his recent op-ed, and will meet with a staffer to discuss space policy and the Artemis program. The discussion concluded with an assessment of Jared Isaacman's NASA leadership, with Tyler noting that while he started late, he shows genuine commitment to the Beyond Earth mission and NASA. The group also discussed NASA Administrator Bill Nelson's leadership and initiatives, including his efforts to bring more civil servants back into NASA and reduce reliance on contractors. They also discussed the recent elimination of the National Space Council by President Trump, with Michael Kratsios, the Trump administration's Science Advisor, now serving as the space policy point man. The conversation concluded with an announcement about the upcoming Beyond Earth Symposium in two weeks. Special thanks to our sponsors: American Institute of Aeronautics and Astronautics, Helix Space in Luxembourg, Celestis Memorial Spaceflights, Astrox Corporation, Dr. Haym Benaroya of Rutgers University, The Space Settlement Progress Blog by John Jossy, The Atlantis Project, and Artless Entertainment. Our Toll Free Line for Live Broadcasts: 1-866-687-7223 (Not in service at this time). For real-time program participation, email Dr. Space at drspace@thespaceshow.com for instructions and access. The Space Show is a non-profit 501(c)(3) through its parent, One Giant Leap Foundation, Inc. To donate via PayPal, use: To donate with Zelle, use the email address: david@onegiantleapfoundation.org. If you prefer donating with a check, please make the check payable to One Giant Leap Foundation and mail to: One Giant Leap Foundation, 11035 Lavender Hill Drive Ste. 160-306, Las Vegas, NV 89135. Upcoming Programs: Broadcast 4504 Zoom: Frank Pietronigro | Friday 13 Feb 2026, 9:30 AM PT. Guests: Frank Pietronigro. Zoom: Frank discusses the Zero Gravity Arts Commission and more. Broadcast 4506 Zoom Open Lines | Sunday 15 Feb 2026, 12:00 PM PT. Guests: Dr.
David Livingston. Open Lines discussion. All topics welcome. Get full access to The Space Show-One Giant Leap Foundation at doctorspace.substack.com/subscribe
The Earth's atmosphere does a good job of protecting humanity from space weather; occasionally, however, a major event does break through our shield and get our attention. Tree rings and ice cores have recorded past space weather events thousands of times larger than any that have occurred in the modern age. Investing in research seems wise.
In this episode of The Gospel of John, we step into the moment where Jesus' growing ministry overlaps with John the Baptist's ongoing work. Centered on John 3:22–24, this passage highlights a unique season where both ministries are active at the same time, setting the stage for deeper reflection on calling, purpose, and God's unfolding plan. Join us as we explore how this powerful transition points to the introduction of the New Covenant and the shifting focus from preparation to fulfillment in Christ. ------------------------------------------------------------------------------------- DONATE: https://evidence4faith.org/give/ WEBSITE: https://evidence4faith.org/ NEWSLETTER: http://eepurl.com/hpazV5 BOOKINGS: https://evidence4faith.org/bookings/ CONTACT: Evidence 4 Faith, 349 Knights Ave, Kewaskum WI 53040, info@evidence4faith.org My goal is that their hearts, having been knit together in love, may be encouraged, and that they may have all the riches that assurance brings in their understanding of the knowledge of the mystery of God, namely, Christ, in whom are hidden all the treasures of wisdom and knowledge. - Colossians 2:2-3 CREDITS: Developed & Hosted by Michael Lane. Produced & Edited by Isabel Kolste. Graphics & Publication by Isabel Kolste. Additional Art, Film, & Photography Credits: Stock media “Memories” provided by mv_production / Pond5 | Logo Stinger: Unsplash.com: Leinstravelier, Logan Moreno Gutierrez, Meggyn Pomerieau, Jaredd Craig, NASA, NOAA, USGS, Sam Carter, Junior REIS, Luka Vovk, Calvin Craig, Mario La Pergola, Timothy Eberly, Priscilla Du Preez, Ismael Paramo, Tingey Injury Law Firm, Dan Cristian Pădureț, Jakob Owens | Wikimedia: Dartmouth University Public Domain, Kelvinsong CC0 | Stock media “A stately Story (Stiner02)” provided by lynnepublishing / Pond5
From leading R&D at a biotech startup company to conducting environmental monitoring for NASA, Veronica Garcia, Ph.D., Scientific Director of the ASM Applied and Environmental Microbiology unit, shares how experiences throughout her career have informed her appreciation for microbes and their real-world applications. She also discusses how the ASM AEM unit will support scientists around the globe by fostering collaboration and advocating for scientific advancements in areas like climate change, water systems and food production. Ashley's Biggest Takeaways: Prior to her role as Scientific Director for the ASM Applied and Environmental Microbiology unit, Garcia was Senior Director of R&D at Boost Biomes, a biotech startup focused on bio-pesticides and bio-fertilizers. Garcia's passion for microbiology began while studying soil remediation at Texas A&M University. Seeing microbes under the microscope for the first time felt like discovering "another world," sparking a lifelong fascination with what microbes are and can do. Driven by a desire to see her science make an immediate impact, Garcia was drawn to industry after completing her Ph.D. At Boost Biomes, Garcia helped transform diverse microbial isolates into bio-pesticides, bio-fertilizers and bio-stimulants for agriculture and food. She progressed from bench scientist to Senior Director of R&D, overseeing discovery, genomics, bioinformatics and product development, and learned the realities of scale-up, cost, regulation and end-user needs. At NASA, she also monitored air, water and surfaces for the Space Shuttle and ISS, ensuring astronaut safety by tracking microbial loads and potential pathogens. ASM is organizing around three scientific units: ASM Applied and Environmental Microbiology (AEM), ASM Health and ASM Mechanism Discovery. These units will equip researchers to translate discovery into impact while providing a forum to collectively shape the future of the field. The AEM unit provides the space and unique expertise for microbial scientists and partners to directly contribute to a healthier, more sustainable world through applied and environmental innovation, and brings together experts whose work connects microbial processes to outcomes in ecosystems, infrastructure, food systems and planetary health. Links For This Episode: Learn More About ASM's Scientific Units. Join the Conversation on ASM Connect, our online community platform. Browse Volunteer Opportunities. Become an ASM Member. Take the MTM listener survey!
Karl and Erum break down how biology is transforming the production of everything from cosmetics to construction materials. They explore why the petrochemical era is giving way to biological manufacturing, examining both the spectacular failures of early biofuels and the emerging success stories of companies like K18 and Mango Materials. Karl and Erum explain the fundamentals of fermentation, precision fermentation, and cell-free manufacturing, while introducing concepts like distributed biomanufacturing and "dirty biology." Drawing on insights from previous guests including Doug Friedman, Michelle Stansfield, Veronica Breckenridge, and Phil Morle, they reveal why 95% of executives are now pursuing bio-solutions and how three converging forces—falling technology costs, rising consumer expectations, and new infrastructure—are making this the moment for biomanufacturing to finally deliver on its promise.Grow Everything brings the bioeconomy to life. Hosts Karl Schmieder and Erum Azeez Khan share stories and interview the leaders and influencers changing the world by growing everything. Biology is the oldest technology. And it can be engineered. What are we growing?Learn more at www.messaginglab.com/groweverything Chapters:(00:00:00) - Why AI might just become our CEO (plus haircuts, Pilates, and gene therapy for hearing loss)(00:02:05) - Eli Lilly's $1B gene therapy deal for hearing loss(00:05:00) - Long Now podcast recommendation and NASA astrobiologist Lynn Rothschild(00:07:00) - Discussion of Apple TV's Scion and Drops of God(00:11:00) - What is biomanufacturing and why does it matter?(00:13:00) - The history of petrochemicals as "green technology"(00:16:00) - The opportunity: removing gigatons of carbon and unlocking trillion-dollar markets(00:19:00) - Types of biomanufacturing: fermentation, precision fermentation, and continuous fermentation(00:22:00) - Cell-free manufacturing and plant cell bioreactors(00:26:00) - Growing products with mycelium and dirty biology approaches(00:29:00) - Why biomanufacturing has been hard: the valley of death(00:30:00) - The biofuels bust and lessons from 60 failed companies(00:34:00) - Infrastructure challenges and the capacity gap(00:36:00) - New solutions: performance over sustainability and the K18 example(00:40:00) - Orchestration beats invention: connecting the entire value chain(00:43:00) - Distributed biomanufacturing and making products from waste(00:48:00) - The bio-better reality: what consumers and CPG companies need(00:51:00) - Three forces converging to make biomanufacturing work now(00:53:00) - Quickfire questions: luxury vs. commodities, funding, and AI's roleLinks and Resources:Links and Resources DOCTopics Covered: biomanufacturing 101, industrial biotechnology, precision fermentation, continuous fermentation, cell-free biomanufacturing, distributed biomanufacturing, dirty biology, bio-based materials, performance vs sustainability, CPG reformulationHave a question or comment? Message us here:Text or Call (804) 505-5553Instagram / Twitter / LinkedIn / Youtube / Grow EverythingMusic by: Nihilore Production by: Amplafy Media
From the rolling hills of country Ireland to rolling waves beneath her boat docked in Hobart, Dr Diane Purcell has explored the most extreme places algae survive.She's also explored the prospect of its survival away from Earth when she worked at NASA studying extremophiles.Some of Diane's earliest research was looking at algae behaviour when it's kind of sleep deprived, and algae that will eat so much it will literally explode!She's also dealt with the ebb and flow of research work by moving to Darwin and working as a high school science teacher.Featuring:Dr Diane Purcell, Project Manager of the Remediation Section, Science and Technical Branch, at the Environment Protection AuthorityProduction:Ann Jones, Presenter / ProducerRebecca McLaren, ProducerHamish Camilleri, Sound EngineerThis episode of What the Duck?! was produced on the land of the Wadawarrung and Taungurung people.Find more episodes of the ABC podcast, What the Duck?! with the always curious Dr Ann Jones exploring the mysteries of nature on the ABC Listen app (Australia) or wherever you get your podcasts. You'll learn more about the weird and unusual aspects of our natural world in a quirky, fun way with easy to understand science.
Greek artist Ioannis Michaloudis credits his success to NASA, claiming he is what he is because of Stardust.
Duration: 00:19:36 - Journal de 12h30 - She has taken flight for a long journey. Sophie Adenot lifted off this morning bound for the International Space Station, aboard a SpaceX rocket, accompanied by two NASA astronauts, Jessica Meir and Jack Hathaway, and Roscosmos cosmonaut Andreï Fediaïev.
Two hours and fifteen minutes before Sophie Adenot's liftoff for the International Space Station, the head of NASA spoke on RTL of his "enthusiasm" for the French astronaut. "It's an exciting collaboration. I can't wait to see her launch," he said. Hosted by Audiomeans. Visit audiomeans.fr/politique-de-confidentialite for more information.
Sophie Adenot has just lifted off for the International Space Station (ISS). With the American Jessica Meir, they are the 65th and 66th women to leave Earth for orbit, compared with more than 500 men. Spaceflight has opened up to women only gradually, and there is still much to do: only 11% of astronauts are women, according to the UN. Reserve astronaut Meganne Christian, recruited by ESA in 2022 in the same class as Sophie Adenot, tells us about the place of women in space programs, the reality of the training, and how she, an engineer specializing in nanomaterials, was selected from 22,523 applications to join this elite program.
Today on Astronomy Daily: Astronomers have witnessed something extraordinary in the Andromeda Galaxy — a massive star that simply vanished, collapsing into a brand-new black hole without the usual supernova fireworks. We cover the SpaceX Crew-12 launch to the ISS, Europe's powerful Ariane 64 flying for the first time with Amazon satellites aboard, another booster anomaly for ULA's Vulcan rocket, a bizarre inside-out planetary system that defies formation models, and NASA's plan to rescue the Swift observatory from orbital decay. Timestamped Chapters 00:00 — Welcome to Astronomy Daily 01:30 — SpaceX Crew-12 launches to the ISS 04:00 — Star vanishes in Andromeda — a black hole is born 08:30 — Europe's Ariane 64 flies for the first time 10:30 — Vulcan rocket suffers repeat booster anomaly 13:00 — The bizarre inside-out planetary system of LHS 1903 15:30 — NASA's Swift observatory fights for survival 17:30 — Sign-off and how to stay connected Key Links • NASA Crew-12 Blog: nasa.gov/blogs/commercialcrew • Vanishing Star Study (Science): doi.org/10.1126/science.adt4853 • Inside-Out Planet Study (Science): doi.org/10.1126/science.adl2348 • NASA Swift Observatory: nasa.gov/swift • Show Website: astronomydaily.io • Social Media: @AstroDailyPod on all platforms. Become a supporter of this podcast: https://www.spreaker.com/podcast/astronomy-daily-space-news-updates--5648921/support. Sponsor Details: Ensure your online privacy by using NordVPN. To get our special listener deal and save a lot of money, visit www.bitesz.com/nordvpn. You'll be glad you did! Become a supporter of Astronomy Daily by joining our Supporters Club. Commercial-free episodes daily are only a click away... Click Here. This episode includes AI-generated content.
Elon Musk changes course: the destination is no longer Mars but the Moon, where he even wants to build a satellite factory. With Luigi Bignami, science journalist and space expert, we look at what lies behind this new announcement, what SpaceX, NASA and Blue Origin have planned, and how China is also accelerating its plans to land astronauts on the Moon. One of the trends seen at CES in Las Vegas at the start of the year concerns the evolution of services and software in the television world: AI Television. Beyond the marketing promises, we see what is actually concrete with Paolo Centofanti, technology expert on the Dday.it editorial team. MEMS (Micro Electro-Mechanical Systems) are microscopic mechanical sensors and actuators on silicon chips that enable fundamental, everyday technologies such as smartphones, airbags, wearables, medical devices, IoT and more. With the acquisition of NXP's MEMS division, STMicroelectronics completes its portfolio and confirms its position as one of the global leaders in this fundamental technology. We discuss it with Simone Ferri, head of the MEMS area at STMicroelectronics. And as always in Digital News, the week's most important innovation and technology news.
It's time for another bonus episode from the geeks. Hosted by Dave Rome, this episode is a dive into the world of torque wrench usage. Oh yes, it's time to get nerdy. Anyone who uses a torque wrench should find value in this episode that covers the do's and don'ts in using a torque wrench. To help with this topic, Dave is joined by Alex Boone, an aerospace engineer who works at NASA's Jet Propulsion Laboratory. Formerly a quality control engineer, and before that, a bike shop rat, Alex knows the ins and outs of using a torque wrench and how best to apply that in bicycle terms. For more on this topic, head on over to EscapeCollective.com for Dave's latest edition of Threaded that summarises and shows many of the concepts discussed within. The full version of this episode is only available to members of Escape Collective. Those on the free feed will hear approximately half the episode. If independent journalism matters to you, you want access to all that we offer (and without ads), or you just want a website that's not trash to look at, then please consider joining at escapecollective.com/geekwarning .
The 365 Days of Astronomy, the daily podcast of the International Year of Astronomy 2009
https://www.youtube.com/watch?v=-Alz4UXGqLk From March 8, 2017. In just a few months, NASA's Cassini spacecraft is going to die, crashing into the planet Saturn. Let's look back across the mission's history. What were the highlights? What did we learn? Team: Fraser Cain - @fcain / frasercain@gmail.com Karla Thompson - @karlaii Chad Weber - weber.chad@gmail.com Ask me my favorite object in the Solar System, especially to see through a telescope, and my answer is always the same: Saturn. Saturn is this crazy, ringed world, different than any other place we've ever seen. And in a small telescope, you can really see the ball of the planet - you can see its rings. We've added a new way to donate to 365 Days of Astronomy to support editing, hosting, and production costs. Just visit: https://www.patreon.com/365DaysOfAstronomy and donate as much as you can! Share the podcast with your friends and send the Patreon link to them too! Every bit helps! Thank you! ------------------------------------ Do go visit http://www.redbubble.com/people/CosmoQuestX/shop for cool Astronomy Cast and CosmoQuest t-shirts, coffee mugs and other awesomeness! http://cosmoquest.org/Donate This show is made possible through your donations. Thank you! (Haven't donated? It's not too late! Just click!) ------------------------------------ The 365 Days of Astronomy Podcast is produced by the Planetary Science Institute. http://www.psi.edu Visit us on the web at 365DaysOfAstronomy.org or email us at info@365DaysOfAstronomy.org.
In this riveting episode, we catch up with Dr. Jonathan Stock, Chief Scientist for Innovation at NASA's Intelligent Systems Division. We dive deep into the realms of geosciences and discuss how innovation can transform our understanding of the Earth and beyond. From quantum gravity gradiometers to AI-driven geophysical mapping, Dr. Stock reveals the tech that could redefine geospatial exploration. We also ponder why geosciences lag behind other fields in entrepreneurship and innovation and how cross-disciplinary collaborations could be the game-changers we need. Join us as we weave through tales of awe-inspiring geological discoveries and the frontier spirit that keeps the field exciting.Download the CampGeo app now at this link. On the app you can get tons of free content, exclusive images, and access to our Geology of National Parks series. You can also learn the basics of geology at the college level in our FREE CampGeo content series - get learning now!Like, Subscribe, and leave us a Rating!——————————————————Instagram: @planetgeocastTwitter: @planetgeocastFacebook: @planetgeocastSupport us: https://planetgeocast.com/support-usEmail: planetgeocast@gmail.comWebsite: https://planetgeocast.com/
Join EEG legend Jay Gunkelman (500,000+ brain scans read) and host Pete Jansons for a thorough exploration of Sensorimotor Rhythm (SMR) — the calming, stabilizing brainwave discovered by Barry Sterman. From cats trained on SMR that resisted toxic rocket fuel seizures (NASA origins) to modern uses in ADHD, epilepsy, insomnia, fibromyalgia, and arousal regulation — this episode breaks down the science, circuits, and clinical realities.
✅ Key Topics Covered:
Barry Sterman's breakthrough: SMR-trained cats survived rocket fuel doses that caused vomiting, panting, salivating, and seizures in controls (ruined the dose-response curve)
Brain circuitry: Thalamus (ventroposterior lateral nucleus) + reticular nucleus (acetylcholine bursts) → sensory-motor cortex feedback → red nucleus quieting → muscle spindle relaxation
SMR as daytime "sleep spindle": Stabilizes red nucleus (Parkinsonism target), cuts sympathetic drive, deeper muscle relaxation, reduces sensory feedback to thalamus
Benefits: Epilepsy stabilization, fibromyalgia (quiets sympathetic input to red nucleus), ADHD clusters (excess theta/alpha, beta compensation), arousal-performance curve centering
Risks: Overtraining SMR drops arousal too far → underarousal/grogginess/rebound giddiness (like kids pre-bedtime); counter with anterior beta (17Hz functional beta on tasks)
Arousal-performance: SMR = brakes (calms overarousal); beta = accelerator (fixes underarousal); no fixed sessions (10 for mild insomnia, 24+ for severe)
ADHD insights: Frontal suppressor strip → caudate/putamen/globus pallidus/thalamus loop (excess GABA inhibition); beta magnitude increases (more events, not amplitude)
From rewriting Google's search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions from CPUs and sharded indices to multimodal models that reason across text, video, and code.Jeff joins us to unpack what it really means to “own the Pareto frontier,” why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules) not FLOPs is becoming the true bottleneck, what it was like leading the charge to unify all of Google's AI teams, and why the next leap won't come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.We discuss:* Jeff's early neural net thesis in 1990: parallel training before it was cool, why he believed scaling would win decades early, and the “bigger model, more data, better results” mantra that held for 15 years* The evolution of Google Search: sharding, moving the entire index into memory in 2001, softening query semantics pre-LLMs, and why retrieval pipelines already resemble modern LLM systems* Pareto frontier strategy: why you need both frontier “Pro” models and low-latency “Flash” models, and how distillation lets smaller models surpass prior generations* Distillation deep dive: ensembles → compression → logits as soft supervision, and why you need the biggest model to make the smallest one good* Latency as a first-class objective: why 10–50x lower latency changes UX entirely, and how future reasoning workloads will demand 10,000 tokens/sec* Energy-based thinking: picojoules per bit, why moving data costs 1000x more than a multiply, batching through the lens of energy, and speculative decoding as amortization* TPU co-design: predicting ML workloads 2–6 years out, speculative hardware features, precision reduction, sparsity, and the constant feedback loop between model architecture and silicon* Sparse models and “outrageously large” networks: trillions of parameters with 1–5% activation, and why sparsity was always the right abstraction* Unified vs. 
specialized models: abandoning symbolic systems, why general multimodal models tend to dominate vertical silos, and when vertical fine-tuning still makes sense* Long context and the illusion of scale: beyond needle-in-a-haystack benchmarks toward systems that narrow trillions of tokens to 117 relevant documents* Personalized AI: attending to your emails, photos, and documents (with permission), and why retrieval + reasoning will unlock deeply personal assistants* Coding agents: 50 AI interns, crisp specifications as a new core skill, and how ultra-low latency will reshape human–agent collaboration* Why ideas still matter: transformers, sparsity, RL, hardware, systems — scaling wasn't blind; the pieces had to multiply togetherShow Notes:* Gemma 3 Paper* Gemma 3* Gemini 2.5 Report* Jeff Dean's “Software Engineering Advice fromBuilding Large-Scale Distributed Systems” Presentation (with Back of the Envelope Calculations)* Latency Numbers Every Programmer Should Know by Jeff Dean* The Jeff Dean Facts* Jeff Dean Google Bio* Jeff Dean on “Important AI Trends” @Stanford AI Club* Jeff Dean & Noam Shazeer — 25 years at Google (Dwarkesh)—Jeff Dean* LinkedIn: https://www.linkedin.com/in/jeff-dean-8b212555* X: https://x.com/jeffdeanGoogle* https://google.com* https://deepmind.googleFull Video EpisodeTimestamps00:00:04 — Introduction: Alessio & Swyx welcome Jeff Dean, chief AI scientist at Google, to the Latent Space podcast00:00:30 — Owning the Pareto Frontier & balancing frontier vs low-latency models00:01:31 — Frontier models vs Flash models + role of distillation00:03:52 — History of distillation and its original motivation00:05:09 — Distillation's role in modern model scaling00:07:02 — Model hierarchy (Flash, Pro, Ultra) and distillation sources00:07:46 — Flash model economics & wide deployment00:08:10 — Latency importance for complex tasks00:09:19 — Saturation of some tasks and future frontier tasks00:11:26 — On benchmarks, public vs internal00:12:53 — Example long-context benchmarks & limitations00:15:01 — Long-context goals: attending to trillions of tokens00:16:26 — Realistic use cases beyond pure language00:18:04 — Multimodal reasoning and non-text modalities00:19:05 — Importance of vision & motion modalities00:20:11 — Video understanding example (extracting structured info)00:20:47 — Search ranking analogy for LLM retrieval00:23:08 — LLM representations vs keyword search00:24:06 — Early Google search evolution & in-memory index00:26:47 — Design principles for scalable systems00:28:55 — Real-time index updates & recrawl strategies00:30:06 — Classic “Latency numbers every programmer should know”00:32:09 — Cost of memory vs compute and energy emphasis00:34:33 — TPUs & hardware trade-offs for serving models00:35:57 — TPU design decisions & co-design with ML00:38:06 — Adapting model architecture to hardware00:39:50 — Alternatives: energy-based models, speculative decoding00:42:21 — Open research directions: complex workflows, RL00:44:56 — Non-verifiable RL domains & model evaluation00:46:13 — Transition away from symbolic systems toward unified LLMs00:47:59 — Unified models vs specialized ones00:50:38 — Knowledge vs reasoning & retrieval + reasoning00:52:24 — Vertical model specialization & modules00:55:21 — Token count considerations for vertical domains00:56:09 — Low resource languages & contextual learning00:59:22 — Origins: Dean's early neural network work01:10:07 — AI for coding & human–model interaction styles01:15:52 — Importance of crisp specification for coding agents01:19:23 — 
Prediction: personalized models & state retrieval01:22:36 — Token-per-second targets (10k+) and reasoning throughput01:23:20 — Episode conclusion and thanksTranscriptAlessio Fanelli [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by Swyx, editor of Latent Space. Shawn Wang [00:00:11]: Hello, hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome. Thanks for having me. It's a bit surreal to have you in the studio. I've watched so many of your talks, and obviously your career has been super legendary. So, I mean, congrats. I think the first thing must be said, congrats on owning the Pareto Frontier.Jeff Dean [00:00:30]: Thank you, thank you. Pareto Frontiers are good. It's good to be out there.Shawn Wang [00:00:34]: Yeah, I mean, I think it's a combination of both. You have to own the Pareto Frontier. You have to have like frontier capability, but also efficiency, and then offer that range of models that people like to use. And, you know, some part of this was started because of your hardware work. Some part of that is your model work, and I'm sure there's lots of secret sauce that you guys have worked on cumulatively. But, like, it's really impressive to see it all come together in, like, this slittily advanced.Jeff Dean [00:01:04]: Yeah, yeah. I mean, I think, as you say, it's not just one thing. It's like a whole bunch of things up and down the stack. And, you know, all of those really combine to help make UNOS able to make highly capable large models, as well as, you know, software techniques to get those large model capabilities into much smaller, lighter weight models that are, you know, much more cost effective and lower latency, but still, you know, quite capable for their size. Yeah.Alessio Fanelli [00:01:31]: How much pressure do you have on, like, having the lower bound of the Pareto Frontier, too? I think, like, the new labs are always trying to push the top performance frontier because they need to raise more money and all of that. And you guys have billions of users. And I think initially when you worked on the CPU, you were thinking about, you know, if everybody that used Google, we use the voice model for, like, three minutes a day, they were like, you need to double your CPU number. Like, what's that discussion today at Google? Like, how do you prioritize frontier versus, like, we have to do this? How do we actually need to deploy it if we build it?Jeff Dean [00:02:03]: Yeah, I mean, I think we always want to have models that are at the frontier or pushing the frontier because I think that's where you see what capabilities now exist that didn't exist at the sort of slightly less capable last year's version or last six months ago version. At the same time, you know, we know those are going to be really useful for a bunch of use cases, but they're going to be a bit slower and a bit more expensive than people might like for a bunch of other broader models. So I think what we want to do is always have kind of a highly capable sort of affordable model that enables a whole bunch of, you know, lower latency use cases. People can use them for agentic coding much more readily and then have the high-end, you know, frontier model that is really useful for, you know, deep reasoning, you know, solving really complicated math problems, those kinds of things. And it's not that. One or the other is useful. They're both useful. So I think we'd like to do both. 
And also, you know, through distillation, which is a key technique for making the smaller models more capable, you know, you have to have the frontier model in order to then distill it into your smaller model. So it's not like an either or choice. You sort of need that in order to actually get a highly capable, more modest size model. Yeah.Alessio Fanelli [00:03:24]: I mean, you and Geoffrey Hinton came up with distillation in 2014.Jeff Dean [00:03:28]: Don't forget Oriol Vinyals as well. Yeah, yeah.Alessio Fanelli [00:03:30]: A long time ago. But like, I'm curious how you think about the cycle of these ideas, even like, you know, sparse models and, you know, how do you reevaluate them? How do you think about in the next generation of model, what is worth revisiting? Like, yeah, you worked on so many ideas that end up being influential, but like in the moment, they might not feel that way necessarily. Yeah.Jeff Dean [00:03:52]: I mean, I think distillation was originally motivated because we were seeing that we had a very large image data set at the time, you know, 300 million images that we could train on. And we were seeing that if you create specialists for different subsets of those image categories, you know, this one's going to be really good at sort of mammals, and this one's going to be really good at sort of indoor room scenes or whatever, and you can cluster those categories and train on an enriched stream of data after you do pre-training on a much broader set of images, you get much better performance. You can then treat that whole set of maybe 50 models you've trained as a large ensemble, but that's not a very practical thing to serve, right? So distillation really came about from the idea of, okay, what if we want to actually serve that and train all these independent sort of expert models and then squish it into something that actually fits in a form factor that you can actually serve? And that's, you know, not that different from what we're doing today. You know, often today, instead of having an ensemble of 50 models, we're having a much larger scale model that we then distill into a much smaller scale model.Shawn Wang [00:05:09]: Yeah. A part of me also wonders if distillation also has a story with the RL revolution. So let me maybe try to articulate what I mean by that, which is, RL basically spikes models in a certain part of the distribution. And then you have to sort of, well, you can spike models, but usually sometimes... It might be lossy in other areas and it's kind of like an uneven technique, but you can probably distill it back, and I think that the sort of general dream is to be able to advance capabilities without regressing on anything else. And I think that whole capability merging without loss, I feel like, you know, some part of that should be a distillation process, but I can't quite articulate it. I haven't seen many papers about it.Jeff Dean [00:06:01]: Yeah, I mean, I tend to think of one of the key advantages of distillation is that you can have a much smaller model and you can have a very large, you know, training data set and you can get utility out of making many passes over that data set because you're now getting the logits from the much larger model in order to sort of coax the right behavior out of the smaller model that you wouldn't otherwise get with just the hard labels. And so, you know, I think that's what we've observed.
Is you can get, you know, very close to your largest model performance with distillation approaches. And that seems to be, you know, a nice sweet spot for a lot of people because it enables us to kind of, for multiple Gemini generations now, we've been able to make the sort of flash version of the next generation as good or even substantially better than the previous generations pro. And I think we're going to keep trying to do that because that seems like a good trend to follow.Shawn Wang [00:07:02]: So, Dara asked, so it was the original map was Flash Pro and Ultra. Are you just sitting on Ultra and distilling from that? Is that like the mother load?Jeff Dean [00:07:12]: I mean, we have a lot of different kinds of models. Some are internal ones that are not necessarily meant to be released or served. Some are, you know, our pro scale model and we can distill from that as well into our Flash scale model. So I think, you know, it's an important set of capabilities to have and also inference time scaling. It can also be a useful thing to improve the capabilities of the model.Shawn Wang [00:07:35]: And yeah, yeah, cool. Yeah. And obviously, I think the economy of Flash is what led to the total dominance. I think the latest number is like 50 trillion tokens. I don't know. I mean, obviously, it's changing every day.Jeff Dean [00:07:46]: Yeah, yeah. But, you know, by market share, hopefully up.Shawn Wang [00:07:50]: No, I mean, there's no I mean, there's just the economics wise, like because Flash is so economical, like you can use it for everything. Like it's in Gmail now. It's in YouTube. Like it's yeah. It's in everything.Jeff Dean [00:08:02]: We're using it more in our search products of various AI mode reviews.Shawn Wang [00:08:05]: Oh, my God. Flash past the AI mode. Oh, my God. Yeah, that's yeah, I didn't even think about that.Jeff Dean [00:08:10]: I mean, I think one of the things that is quite nice about the Flash model is not only is it more affordable, it's also a lower latency. And I think latency is actually a pretty important characteristic for these models because we're going to want models to do much more complicated things that are going to involve, you know, generating many more tokens from when you ask the model to do so. So, you know, if you're going to ask the model to do something until it actually finishes what you ask it to do, because you're going to ask now, not just write me a for loop, but like write me a whole software package to do X or Y or Z. And so having low latency systems that can do that seems really important. And Flash is one direction, one way of doing that. You know, obviously our hardware platforms enable a bunch of interesting aspects of our, you know, serving stack as well, like TPUs, the interconnect between. Chips on the TPUs is actually quite, quite high performance and quite amenable to, for example, long context kind of attention operations, you know, having sparse models with lots of experts. These kinds of things really, really matter a lot in terms of how do you make them servable at scale.Alessio Fanelli [00:09:19]: Yeah. Does it feel like there's some breaking point for like the proto Flash distillation, kind of like one generation delayed? I almost think about almost like the capability as a. In certain tasks, like the pro model today is a saturated, some sort of task. So next generation, that same task will be saturated at the Flash price point. 
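To make the distillation mechanics Jeff describes concrete (soft teacher logits guiding a small model, rather than hard labels alone), here is a minimal PyTorch-style sketch of the usual loss, in the spirit of the Hinton, Vinyals, and Dean formulation mentioned above. The temperature and mixing weight are illustrative defaults, not anything specific to Gemini or Gemma.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, hard_labels, T=2.0, alpha=0.5):
    """Blend of (a) KL divergence to the teacher's softened distribution and
    (b) ordinary cross-entropy on the hard labels.

    The teacher's logits carry information about how classes/tokens relate that
    hard labels alone don't provide. T softens both distributions; the T*T factor
    keeps gradient magnitudes comparable across temperatures.
    """
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    soft_student = F.log_softmax(student_logits / T, dim=-1)
    kd = F.kl_div(soft_student, soft_targets, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, hard_labels)
    return alpha * kd + (1 - alpha) * ce

# Toy usage: a batch of 4 examples over a 10-way output.
student_logits = torch.randn(4, 10, requires_grad=True)
teacher_logits = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```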
And I think for most of the things that people use models for at some point, the Flash model in two generation will be able to do basically everything. And how do you make it economical to like keep pushing the pro frontier when a lot of the population will be okay with the Flash model? I'm curious how you think about that.Jeff Dean [00:09:59]: I mean, I think that's true. If your distribution of what people are asking people, the models to do is stationary, right? But I think what often happens is as the models become more capable, people ask them to do more, right? So, I mean, I think this happens in my own usage. Like I used to try our models a year ago for some sort of coding task, and it was okay at some simpler things, but wouldn't do work very well for more complicated things. And since then, we've improved dramatically on the more complicated coding tasks. And now I'll ask it to do much more complicated things. And I think that's true, not just of coding, but of, you know, now, you know, can you analyze all the, you know, renewable energy deployments in the world and give me a report on solar panel deployment or whatever. That's a very complicated, you know, more complicated task than people would have asked a year ago. And so you are going to want more capable models to push the frontier in the absence of what people ask the models to do. And that also then gives us. Insight into, okay, where does the, where do things break down? How can we improve the model in these, these particular areas, uh, in order to sort of, um, make the next generation even better.Alessio Fanelli [00:11:11]: Yeah. Are there any benchmarks or like test sets they use internally? Because it's almost like the same benchmarks get reported every time. And it's like, all right, it's like 99 instead of 97. Like, how do you have to keep pushing the team internally to it? Or like, this is what we're building towards. Yeah.Jeff Dean [00:11:26]: I mean, I think. Benchmarks, particularly external ones that are publicly available. Have their utility, but they often kind of have a lifespan of utility where they're introduced and maybe they're quite hard for current models. You know, I, I like to think of the best kinds of benchmarks are ones where the initial scores are like 10 to 20 or 30%, maybe, but not higher. And then you can sort of work on improving that capability for, uh, whatever it is, the benchmark is trying to assess and get it up to like 80, 90%, whatever. I, I think once it hits kind of 95% or something, you get very diminishing returns from really focusing on that benchmark, cuz it's sort of, it's either the case that you've now achieved that capability, or there's also the issue of leakage in public data or very related kind of data being, being in your training data. Um, so we have a bunch of held out internal benchmarks that we really look at where we know that wasn't represented in the training data at all. There are capabilities that we want the model to have. Um, yeah. Yeah. Um, that it doesn't have now, and then we can work on, you know, assessing, you know, how do we make the model better at these kinds of things? Is it, we need different kind of data to train on that's more specialized for this particular kind of task. 
Do we need, um, you know, a bunch of, uh, you know, architectural improvements or some sort of, uh, model capability improvements, you know, what would help make that better?Shawn Wang [00:12:53]: Is there, is there such an example, uh, of a benchmark inspiring an architectural improvement? Like, uh, I'm just kind of jumping on that because you just...Jeff Dean [00:13:02]: Uh, I mean, I think some of the long context capability of the, of the Gemini models that came, I guess, first in 1.5 really were about looking at, okay, we want to have, um, you know,Shawn Wang [00:13:15]: immediately everyone jumped to like completely green charts, like everyone had it, and I was like, how did everyone crack this at the same time? Right. Yeah. Yeah.Jeff Dean [00:13:23]: I mean, I think, um, as you say, the single needle-in-a-haystack benchmark is really saturated for at least context lengths up to 128K or something, and actually, you know, much larger than 128K these days. We're trying to push the frontier of 1 million or 2 million context, which is good because I think there are a lot of use cases where, you know, putting a thousand pages of text or putting, you know, multiple hour-long videos in the context and then actually being able to make use of that is useful. The use cases to explore there are fairly large. But the single needle in a haystack benchmark is sort of saturated. So you really want more complicated, sort of multi-needle or more realistic, take all this content and produce this kind of answer from a long context, that sort of better assesses what it is people really want to do with long context. Which is not just, you know, can you tell me the product number for this particular thing?Shawn Wang [00:14:31]: Yeah, it's retrieval. It's retrieval within machine learning. It's interesting because I think the more meta level I'm trying to operate at here is you have a benchmark. You're like, okay, I see the architectural thing I need to do in order to go fix that. But should you do it? Because sometimes that's an inductive bias, basically. It's what Jason Wei, who used to work at Google, would say. Exactly the kind of thing. Yeah, you're going to win short term. Longer term, I don't know if that's going to scale. You might have to undo that.Jeff Dean [00:15:01]: I mean, I like to sort of not focus on exactly what solution we're going to derive, but what capability would you want? And I think we're very convinced that, you know, long context is useful, but it's way too short today. Right? Like, I think what you would really want is, can I attend to the internet while I answer my question? Right? But that's not going to happen just by purely scaling the existing solutions, which are quadratic. So a million tokens kind of pushes what you can do. You're not going to do that for a billion tokens, let alone a trillion. But I think if you could give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses for that. You could attend to the internet. You could attend to the pixels of YouTube and the sort of deeper representations that we can find, not just for a single video, but across many videos. You know, on a personal Gemini level, you could attend to all of your personal state with your permission.
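A quick back-of-the-envelope, in the spirit of the calculations discussed later in this conversation, shows why the trillion-token goal cannot be met by scaling quadratic attention directly, and why the "illusion" has to come from the surrounding system. The FLOP counts below are rough, illustrative arithmetic only.

```python
# Rough arithmetic: quadratic attention cost vs. context length (illustrative only).

def attention_score_flops(context_tokens, num_heads=32, head_dim=128):
    # One forward pass of the QK^T score matrix for a single layer:
    # context^2 dot products, each of size head_dim, across all heads.
    return 2 * num_heads * head_dim * context_tokens ** 2

for n in (128_000, 1_000_000, 1_000_000_000, 1_000_000_000_000):
    print(f"{n:>16,d} tokens -> ~{attention_score_flops(n):.2e} FLOPs per layer just for scores")

# Going from 1M to 1T tokens multiplies this term by (10^6)^2 = 10^12, which is why
# "attend to the internet" has to be an illusion built from retrieval plus a small
# working set, not a literal trillion-token attention window.
```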
So like your emails, your photos, your docs, your plane tickets you have. I think that would be really, really useful. And the question is, how do you get algorithmic improvements and system level improvements that get you to something where you actually can attend to trillions of tokens? Right. In a meaningful way. Yeah.Shawn Wang [00:16:26]: But by the way, I think I did some math and it's like, if you spoke all day, every day for eight hours a day, you only generate a maximum of like a hundred K tokens, which like very comfortably fits.Jeff Dean [00:16:38]: Right. But if you then say, okay, I want to be able to understand everything people are putting on videos.Shawn Wang [00:16:46]: Well, also, I think that the classic example is you start going beyond language into like proteins and whatever else is extremely information dense. Yeah. Yeah.Jeff Dean [00:16:55]: I mean, I think one of the things about Gemini's multimodal aspects is we've always wanted it to be multimodal from the start. And so, you know, that sometimes to people means text and images and video sort of human-like and audio, audio, human-like modalities. But I think it's also really useful to have Gemini know about non-human modalities. Yeah. Like LIDAR sensor data from. Yes. Say, Waymo vehicles or. Like robots or, you know, various kinds of health modalities, x-rays and MRIs and imaging and genomics information. And I think there's probably hundreds of modalities of data where you'd like the model to be able to at least be exposed to the fact that this is an interesting modality and has certain meaning in the world. Where even if you haven't trained on all the LIDAR data or MRI data, you could have, because maybe that's not, you know, it doesn't make sense in terms of trade-offs of. You know, what you include in your main pre-training data mix, at least including a little bit of it is actually quite useful. Yeah. Because it sort of tempts the model that this is a thing.Shawn Wang [00:18:04]: Yeah. Do you believe, I mean, since we're on this topic and something I just get to ask you all the questions I always wanted to ask, which is fantastic. Like, are there some king modalities, like modalities that supersede all the other modalities? So a simple example was Vision can, on a pixel level, encode text. And DeepSeq had this DeepSeq CR paper that did that. Vision. And Vision has also been shown to maybe incorporate audio because you can do audio spectrograms and that's, that's also like a Vision capable thing. Like, so, so maybe Vision is just the king modality and like. Yeah.Jeff Dean [00:18:36]: I mean, Vision and Motion are quite important things, right? Motion. Well, like video as opposed to static images, because I mean, there's a reason evolution has evolved eyes like 23 independent ways, because it's such a useful capability for sensing the world around you, which is really what we want these models to be. So I think the only thing that we can be able to do is interpret the things we're seeing or the things we're paying attention to and then help us in using that information to do things. Yeah.Shawn Wang [00:19:05]: I think motion, you know, I still want to shout out, I think Gemini, still the only native video understanding model that's out there. So I use it for YouTube all the time. Nice.Jeff Dean [00:19:15]: Yeah. Yeah. I mean, it's actually, I think people kind of are not necessarily aware of what the Gemini models can actually do. Yeah. Like I have an example I've used in one of my talks. 
It had like, it was like a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has like Michael Jordan hitting some jump shot at the end of the finals and, you know, some soccer goals and things like that. And you can literally just give it the video and say, can you please make me a table of what all these different events are, what the date is when they happened, and a short description? And so you get like now an 18-row table of that information extracted from the video, which is, you know, not something most people think of, like turning a video into a SQL-like table.Alessio Fanelli [00:20:11]: Has there been any discussion inside of Google of like, you mentioned attending to the whole internet, right? Google, it's almost built because a human cannot attend to the whole internet and you need some sort of ranking to find what you need. Yep. That ranking is like much different for an LLM because you can expect a person to look at maybe the first five, six links in a Google search versus for an LLM. Should you expect to have 20 links that are highly relevant? Like how do you internally figure out, you know, how do we build the AI mode that is like maybe like much broader in search and span versus like the more human one? Yeah.Jeff Dean [00:20:47]: I mean, I think even pre-language model based work, you know, our ranking systems would be built to start with a giant number of web pages in our index, many of them not relevant. So you identify a subset of them that are relevant with very lightweight kinds of methods. You know, you're down to like 30,000 documents or something. And then you gradually refine that to apply more and more sophisticated algorithms and more and more sophisticated sort of signals of various kinds in order to get down to ultimately what you show, which is, you know, the final 10 results or, you know, 10 results plus other kinds of information. And I think an LLM based system is not going to be that dissimilar, right? You're going to attend to trillions of tokens, but you're going to want to identify, you know, what are the 30,000-ish documents that have the, you know, maybe 30 million interesting tokens. And then how do you go from that into what are the 117 documents I really should be paying attention to in order to carry out the tasks that the user has asked? And I think, you know, you can imagine systems where you have, you know, a lot of highly parallel processing to identify those initial 30,000 candidates, maybe with very lightweight kinds of models. Then you have some system that sort of helps you narrow down from 30,000 to the 117 with maybe a little bit more sophisticated model or set of models. And then maybe the final model, the thing that looks at the 117 things, might be your most capable model. So I think it's going to be some system like that, that really enables you to give the illusion of attending to trillions of tokens. Sort of the way Google search gives you, you know, not the illusion, but you are searching the internet, but you're finding, you know, a very small subset of things that are, that are relevant.Shawn Wang [00:22:47]: Yeah. I often tell a lot of people that are not steeped in like Google search history that, well, you know, like Bert was.
Like I don't, I don't have any numbers off the top of my head, but like, I'm sure you guys, that's obviously the most important numbers to Google. Yeah.Jeff Dean [00:23:08]: I mean, I think going to an LLM based representation of text and words and so on enables you to get out of the explicit hard notion of, of particular words having to be on the page, but really getting at the notion of this topic of this page or this page. Paragraph is highly relevant to this query. Yeah.Shawn Wang [00:23:28]: I don't think people understand how much LLMs have taken over all these very high traffic system, very high traffic. Yeah. Like it's Google, it's YouTube. YouTube has this like semantics ID thing where it's just like every token or every item in the vocab is a YouTube video or something that predicts the video using a code book, which is absurd to me for YouTube size.Jeff Dean [00:23:50]: And then most recently GROK also for, for XAI, which is like, yeah. I mean, I'll call out even before LLMs were used extensively in search, we put a lot of emphasis on softening the notion of what the user actually entered into the query.Shawn Wang [00:24:06]: So do you have like a history of like, what's the progression? Oh yeah.Jeff Dean [00:24:09]: I mean, I actually gave a talk in, uh, I guess, uh, web search and data mining conference in 2009, uh, where we never actually published any papers about the origins of Google search, uh, sort of, but we went through sort of four or five or six. generations, four or five or six generations of, uh, redesigning of the search and retrieval system, uh, from about 1999 through 2004 or five. And that talk is really about that evolution. And one of the things that really happened in 2001 was we were sort of working to scale the system in multiple dimensions. So one is we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality in general. Uh, because if you don't have the page in your index, you're going to not do well. Um, and then we also needed to scale our capacity because we were, our traffic was growing quite extensively. Um, and so we had, you know, a sharded system where you have more and more shards as the index grows, you have like 30 shards. And then if you want to double the index size, you make 60 shards so that you can bound the latency by which you respond for any particular user query. Um, and then as traffic grows, you add, you add more and more replicas of each of those. And so we eventually did the math that realized that in a data center where we had say 60 shards and, um, you know, 20 copies of each shard, we now had 1200 machines, uh, with disks. And we did the math and we're like, Hey, one copy of that index would actually fit in memory across 1200 machines. So in 2001, we introduced, uh, we put our entire index in memory and what that enabled from a quality perspective was amazing. Um, and so we had more and more replicas of each of those. Before you had to be really careful about, you know, how many different terms you looked at for a query, because every one of them would involve a disk seek on every one of the 60 shards. And so you, as you make your index bigger, that becomes even more inefficient. But once you have the whole index in memory, it's totally fine to have 50 terms you throw into the query from the user's original three or four word query, because now you can add synonyms like restaurant and restaurants and cafe and, uh, you know, things like that. Uh, bistro and all these things. 
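The search cascade Jeff describes here, and the 30,000-candidates-down-to-117-documents funnel he sketched a little earlier, share the same shape: cheap scoring over a huge corpus, a better model over a small candidate set, and the most capable model only over the final handful. A toy sketch of that shape, where the scoring functions are deliberately simplistic stand-ins rather than anything resembling a production ranking stack:

```python
# Toy cascade: cheap filter over a huge corpus, better scoring over a small candidate
# set, and the expensive model only over the final ~hundred items.

def stage1_cheap_filter(corpus, query, keep=30_000):
    # Stand-in for an inverted-index or approximate-nearest-neighbor lookup:
    # rank by raw query-word overlap.
    words = set(query.lower().split())
    return sorted(corpus, key=lambda d: -len(words & set(d.lower().split())))[:keep]

def stage2_rerank(candidates, query, keep=117):
    # Stand-in for a lightweight scoring model: same overlap, ties broken by brevity.
    words = set(query.lower().split())
    return sorted(candidates,
                  key=lambda d: (-len(words & set(d.lower().split())), len(d)))[:keep]

def stage3_answer(top_docs, query):
    # Only here would a frontier model pay attention cost, and only over ~117 docs
    # rather than the trillions of tokens the cascade started from.
    return f"answer({query!r}) grounded in {len(top_docs)} documents"

corpus = ["solar panel deployment report", "sourdough recipe", "solar storm primer"] * 10_000
print(stage3_answer(stage2_rerank(stage1_cheap_filter(corpus, "solar deployment"),
                                  "solar deployment"), "solar deployment"))
```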
And you can suddenly start, uh, sort of really, uh, getting at the meaning of the word as opposed to the exact semantic form the user typed in. And that was, you know, 2001, very much pre LLM, but really it was about softening the, the strict definition of what the user typed in order to get at the meaning.Alessio Fanelli [00:26:47]: What are like principles that you use to like design the systems, especially when you have, I mean, in 2001, the internet is like. Doubling, tripling every year in size is not like, uh, you know, and I think today you kind of see that with LLMs too, where like every year the jumps in size and like capabilities are just so big. Are there just any, you know, principles that you use to like, think about this? Yeah.Jeff Dean [00:27:08]: I mean, I think, uh, you know, first, whenever you're designing a system, you want to understand what are the sort of design parameters that are going to be most important in designing that, you know? So, you know, how many queries per second do you need to handle? How big is the internet? How big is the index you need to handle? How much data do you need to keep for every document in the index? How are you going to look at it when you retrieve things? Um, what happens if traffic were to double or triple, you know, will that system work well? And I think a good design principle is you're going to want to design a system so that the most important characteristics could scale by like factors of five or 10, but probably not beyond that because often what happens is if you design a system for X. And something suddenly becomes a hundred X, that would enable a very different point in the design space that would not make sense at X. But all of a sudden at a hundred X makes total sense. So like going from a disk space index to a in memory index makes a lot of sense once you have enough traffic, because now you have enough replicas of the sort of state on disk that those machines now actually can hold, uh, you know, a full copy of the, uh, index and memory. Yeah. And that all of a sudden enabled. A completely different design that wouldn't have been practical before. Yeah. Um, so I'm, I'm a big fan of thinking through designs in your head, just kind of playing with the design space a little before you actually do a lot of writing of code. But, you know, as you said, in the early days of Google, we were growing the index, uh, quite extensively. We were growing the update rate of the index. So the update rate actually is the parameter that changed the most. Surprising. So it used to be once a month.Shawn Wang [00:28:55]: Yeah.Jeff Dean [00:28:56]: And then we went to a system that could update any particular page in like sub one minute. Okay.Shawn Wang [00:29:02]: Yeah. Because this is a competitive advantage, right?Jeff Dean [00:29:04]: Because all of a sudden news related queries, you know, if you're, if you've got last month's news index, it's not actually that useful for.Shawn Wang [00:29:11]: News is a special beast. Was there any, like you could have split it onto a separate system.Jeff Dean [00:29:15]: Well, we did. We launched a Google news product, but you also want news related queries that people type into the main index to also be sort of updated.Shawn Wang [00:29:23]: So, yeah, it's interesting. And then you have to like classify whether the page is, you have to decide which pages should be updated and what frequency. 
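The recrawl scheduling question raised here is usually framed as an expected-value calculation: refresh priority is roughly the probability the page has changed since the last crawl times how much you care about it being fresh. A toy sketch, with invented pages and weights:

```python
import math

def recrawl_priority(importance, change_rate_per_day, days_since_crawl):
    """Expected value of recrawling now: page importance times the probability it
    has changed since the last crawl (modeled here as a Poisson process).

    A high-importance page can deserve frequent recrawls even if it rarely changes,
    which is the point made in the conversation."""
    p_changed = 1.0 - math.exp(-change_rate_per_day * days_since_crawl)
    return importance * p_changed

pages = [
    # (name, importance, expected changes/day, days since last crawl) -- illustrative only
    ("major-news-homepage", 0.99, 48.0, 0.02),
    ("popular-reference-article", 0.90, 0.01, 7.0),
    ("obscure-static-page", 0.05, 0.001, 30.0),
]

for name, imp, rate, age in sorted(pages, key=lambda p: -recrawl_priority(p[1], p[2], p[3])):
    print(f"{name:28s} priority={recrawl_priority(imp, rate, age):.3f}")
```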
Oh yeah.Jeff Dean [00:29:30]: There's a whole like, uh, system behind the scenes that's trying to decide update rates and importance of the pages. So even if the update rate seems low, you might still want to recrawl important pages quite often because, uh, the likelihood they change might be low, but the value of having them updated is high.Shawn Wang [00:29:50]: Yeah, yeah, yeah, yeah. Uh, well, you know, yeah. This, uh, you know, mention of latency and, and saving things to disk reminds me of one of your classics, which I have to bring up, which is latency numbers every programmer should know. Uh, was there a, was it just a, just a general story behind that? Did you like just write it down?Jeff Dean [00:30:06]: I mean, this has like sort of eight or 10 different kinds of metrics that are like, how long does a cache miss take? How long does a branch mispredict take? How long does a reference to main memory take? How long does it take to send, you know, a packet from the U.S. to the Netherlands or something? Um,Shawn Wang [00:30:21]: Why the Netherlands, by the way? Or is it, is that because of Chrome?Jeff Dean [00:30:25]: Uh, we had a data center in the Netherlands, um, so, I mean, I think this gets to the point of being able to do the back of the envelope calculations. So these are sort of the raw ingredients of those, and you can use them to say, okay, well, if I need to design a system to do image search and thumbnailing or something of the result page, you know, how would I do that? I could pre-compute the image thumbnails. I could, like, try to thumbnail them on the fly from the larger images. What would that do? How much disk bandwidth would I need? How many disk seeks would I do? Um, and you can sort of actually do thought experiments in, you know, 30 seconds or a minute with the sort of, uh, basic, uh, basic numbers at your fingertips. Uh, and then as you sort of build software using higher level libraries, you kind of want to develop the same intuitions for how long does it take to, you know, look up something in this particular kind of...Shawn Wang [00:31:51]: Which is a simple byte conversion. That's nothing interesting. I wonder if you have any, if you were to update your...Jeff Dean [00:31:58]: I mean, I think it's really good to think about calculations you're doing in a model, either for training or inference.Jeff Dean [00:32:09]: Often a good way to view that is how much state will you need to bring in from memory, either, like, on-chip SRAM or HBM, the accelerator-attached memory, or DRAM, or over the network. And then how expensive is that data motion relative to the cost of, say, an actual multiply in the matrix multiply unit? And that cost is actually really, really low, right? Because, depending on your precision, I think it's like sub one picojoule.Shawn Wang [00:32:50]: Oh, okay. You measure it by energy. Yeah. Yeah.Jeff Dean [00:32:52]: Yeah. I mean, it's all going to be about energy and how do you make the most energy efficient system. And then moving data from the SRAM on the other side of the chip, not even off the chip, but on the other side of the same chip, can be, you know, a thousand picojoules. Oh, yeah. And so all of a sudden, this is why your accelerators require batching. Because if you move, like, say, a parameter of a model from SRAM on the, on the chip into the multiplier unit, that's going to cost you a thousand picojoules. So you'd better make use of that thing that you moved many, many times. So that's where the batch dimension comes in. Because all of a sudden, you know, if you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's really not good.Shawn Wang [00:33:40]: Yeah. Yeah. Right.Jeff Dean [00:33:41]: Because then you paid a thousand picojoules in order to do your one picojoule multiply.Shawn Wang [00:33:46]: I have never heard an energy-based analysis of batching.Jeff Dean [00:33:50]: Yeah. I mean, that's why people batch. Yeah. Ideally, you'd like to use batch size one because the latency would be great.Shawn Wang [00:33:56]: The best latency.Jeff Dean [00:33:56]: But the energy cost and the compute cost inefficiency that you get is quite large. So, yeah.
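Jeff's energy argument for batching reduces to two numbers and a division: if moving a weight from on-chip SRAM into the multiply unit costs on the order of a thousand picojoules and the multiply itself costs around one, the move has to be amortized over the batch. The constants below are just the rough orders of magnitude quoted in the conversation, not datasheet values.

```python
# Rough energy amortization argument for batching (orders of magnitude from the
# conversation, not measured hardware numbers).

PJ_PER_MULTIPLY = 1.0        # ~a picojoule or less per low-precision multiply
PJ_PER_WEIGHT_MOVE = 1000.0  # moving a weight across the chip from SRAM into the MXU

def energy_per_useful_multiply(batch_size):
    """Each weight moved once is reused `batch_size` times (one per example in the
    batch), so the movement cost is amortized across the batch."""
    return PJ_PER_MULTIPLY + PJ_PER_WEIGHT_MOVE / batch_size

for b in (1, 8, 64, 256):
    print(f"batch={b:4d}  ~{energy_per_useful_multiply(b):8.1f} pJ per useful multiply")
# batch=1 pays ~1001 pJ per multiply; batch=256 pays ~5 pJ -- the "why accelerators
# batch" point, traded off against the latency benefit of small batches.
```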
Shawn Wang [00:34:04]: Is there a similar trick like, like, like you did with, you know, putting everything in memory? Like, you know, I think obviously Groq has caused a lot of waves with betting very hard on SRAM. I wonder if, like, that's something that you already saw with, with the TPUs, right? Like that, that you had to, uh, to serve at your scale, uh, you probably sort of saw that coming. Like what, what, what hardware, uh, innovations or insights were formed because of what you're seeing there?Jeff Dean [00:34:33]: Yeah. I mean, I think, you know, TPUs have this nice, uh, sort of regular structure of 2D or 3D meshes with a bunch of chips connected. Yeah. And each one of those has HBM attached. Um, I think for serving some kinds of models, uh, you know, you, you pay a lot higher cost, uh, in time and latency, um, bringing things in from HBM than you do bringing them in from, uh, SRAM on the chip. So if you have a small enough model, you can actually do model parallelism, spread it out over lots of chips, and you actually get quite good throughput improvements and latency improvements from doing that. And so you're now sort of striping your smallish scale model over say 16 or 64 chips. Uh, but if you do that and it all fits in SRAM, uh, that can be a big win. So yeah, that's not a surprise, but it is a good technique.Alessio Fanelli [00:35:27]: Yeah. What about the TPU design? Like how much do you decide where the improvements have to go? So like, this is like a good example of like, is there a way to bring the thousand picojoules down to 50? Like, is it worth designing a new chip to do that? The extreme is like when people say, oh, you should burn the model into the ASIC, and that's kind of like the most extreme thing. How much of it is it worth doing in hardware when things change so quickly? Like what was the internal discussion? Yeah.Jeff Dean [00:35:57]: I mean, we, we have a lot of interaction between say the TPU chip design architecture team and the sort of higher level modeling, uh, experts, because you really want to take advantage of being able to co-design what should future TPUs look like based on where we think the sort of ML research puck is going, uh, in some sense, because, uh, you know, as a hardware designer for ML in particular, you're trying to design a chip starting today, and that design might take two years before it even lands in a data center. And then it has to sort of have a reasonable lifetime of the chip that takes you out three, four or five years. So you're trying to predict, you know, what ML computations people will want to run two to six years out, in a very fast changing field. And so having people with
Interesting ML research ideas of things we think will start to work in that timeframe or will be more important in that timeframe, uh, really enables us to then get, you know, interesting hardware features put into, you know, TPU N plus two, where TPU N is what we have today.Shawn Wang [00:37:10]: Oh, the cycle time is plus two.Jeff Dean [00:37:12]: Roughly. Wow. Because, uh, I mean, sometimes you can squeeze some changes into N plus one, but, you know, bigger changes are going to require the chip. Yeah. Design be earlier in its lifetime design process. Um, so whenever we can do that, it's generally good. And sometimes you can put in speculative features that maybe won't cost you much chip area, but if it works out, it would make something, you know, 10 times as fast. And if it doesn't work out, well, you burned a little bit of tiny amount of your chip area on that thing, but it's not that big a deal. Uh, sometimes it's a very big change and we want to be pretty sure this is going to work out. So we'll do like lots of carefulness. Uh, ML experimentation to show us, uh, this is actually the, the way we want to go. Yeah.Alessio Fanelli [00:37:58]: Is there a reverse of like, we already committed to this chip design so we can not take the model architecture that way because it doesn't quite fit?Jeff Dean [00:38:06]: Yeah. I mean, you, you definitely have things where you're going to adapt what the model architecture looks like so that they're efficient on the chips that you're going to have for both training and inference of that, of that, uh, generation of model. So I think it kind of goes both ways. Um, you know, sometimes you can take advantage of, you know, lower precision things that are coming in a future generation. So you can, might train it at that lower precision, even if the current generation doesn't quite do that. Mm.Shawn Wang [00:38:40]: Yeah. How low can we go in precision?Jeff Dean [00:38:43]: Because people are saying like ternary is like, uh, yeah, I mean, I'm a big fan of very low precision because I think that gets, that saves you a tremendous amount of time. Right. Because it's picojoules per bit that you're transferring and reducing the number of bits is a really good way to, to reduce that. Um, you know, I think people have gotten a lot of luck, uh, mileage out of having very low bit precision things, but then having scaling factors that apply to a whole bunch of, uh, those, those weights. Scaling. How does it, how does it, okay.Shawn Wang [00:39:15]: Interesting. You, so low, low precision, but scaled up weights. Yeah. Huh. Yeah. Never considered that. Yeah. Interesting. Uh, w w while we're on this topic, you know, I think there's a lot of, um, uh, this, the concept of precision at all is weird when we're sampling, you know, uh, we just, at the end of this, we're going to have all these like chips that I'll do like very good math. And then we're just going to throw a random number generator at the start. So, I mean, there's a movement towards, uh, energy based, uh, models and processors. I'm just curious if you've, obviously you've thought about it, but like, what's your commentary?Jeff Dean [00:39:50]: Yeah. I mean, I think. There's a bunch of interesting trends though. 
Energy based models is one, you know, diffusion based models, which don't sort of sequentially decode tokens is another, um, you know, speculative decoding is a way that you can get sort of an equivalent, very small.Shawn Wang [00:40:06]: Draft.Jeff Dean [00:40:07]: Batch factor, uh, for like you predict eight tokens out and that enables you to sort of increase the effective batch size of what you're doing by a factor of eight, even, and then you maybe accept five or six of those tokens. So you get. A five, a five X improvement in the amortization of moving weights, uh, into the multipliers to do the prediction for the, the tokens. So these are all really good techniques and I think it's really good to look at them from the lens of, uh, energy, real energy, not energy based models, um, and, and also latency and throughput, right? If you look at things from that lens, that sort of guides you to. Two solutions that are gonna be, uh, you know, better from, uh, you know, being able to serve larger models or, you know, equivalent size models more cheaply and with lower latency.Shawn Wang [00:41:03]: Yeah. Well, I think, I think I, um, it's appealing intellectually, uh, haven't seen it like really hit the mainstream, but, um, I do think that, uh, there's some poetry in the sense that, uh, you know, we don't have to do, uh, a lot of shenanigans if like we fundamentally. Design it into the hardware. Yeah, yeah.Jeff Dean [00:41:23]: I mean, I think there's still a, there's also sort of the more exotic things like analog based, uh, uh, computing substrates as opposed to digital ones. Uh, I'm, you know, I think those are super interesting cause they can be potentially low power. Uh, but I think you often end up wanting to interface that with digital systems and you end up losing a lot of the power advantages in the digital to analog and analog to digital conversions. You end up doing, uh, at the sort of boundaries. And periphery of that system. Um, I still think there's a tremendous distance we can go from where we are today in terms of energy efficiency with sort of, uh, much better and specialized hardware for the models we care about.Shawn Wang [00:42:05]: Yeah.Alessio Fanelli [00:42:06]: Um, any other interesting research ideas that you've seen, or like maybe things that you cannot pursue a Google that you would be interested in seeing researchers take a step at, I guess you have a lot of researchers. Yeah, I guess you have enough, but our, our research.Jeff Dean [00:42:21]: Our research portfolio is pretty broad. I would say, um, I mean, I think, uh, in terms of research directions, there's a whole bunch of, uh, you know, open problems and how do you make these models reliable and able to do much longer, kind of, uh, more complex tasks that have lots of subtasks. How do you orchestrate, you know, maybe one model that's using other models as tools in order to sort of build, uh, things that can accomplish, uh, you know, much more. Yeah. Significant pieces of work, uh, collectively, then you would ask a single model to do. Um, so that's super interesting. How do you get more verifiable, uh, you know, how do you get RL to work for non-verifiable domains? I think it's a pretty interesting open problem because I think that would broaden out the capabilities of the models, the improvements that you're seeing in both math and coding. Uh, if we could apply those to other less verifiable domains, because we've come up with RL techniques that actually enable us to do that. 
Uh, effectively, that would, that would really make the models improve quite a lot, I think.Alessio Fanelli [00:43:26]: I'm curious, like when we had Noam Brown on the podcast, he said, um, they already proved you can do it with deep research. Um, you kind of have it with AI mode in a way; it's not verifiable. I'm curious if there's any thread that you think is interesting there. Like what is it? Both are like information retrieval of JSON. So I wonder if it's like the retrieval is the verifiable part that you can score, or what are, like, yeah, yeah. How, how would you model that, that problem?Jeff Dean [00:43:55]: Yeah. I mean, I think there are ways of having other models that can evaluate the results of what a first model did, maybe even retrieving. Can you have another model that says, are these things you retrieved relevant? Or can you rate these 2000 things you retrieved to assess which ones are the 50 most relevant or something? Um, I think those kinds of techniques are actually quite effective. Sometimes it can even be the same model, just prompted differently to be a, you know, a critic as opposed to a, uh, actual retrieval system. Yeah.Shawn Wang [00:44:28]: Um, I do think like there, there is that, that weird cliff where like, it feels like we've done the easy stuff and then now it's, but it always feels like that every year. It's like, oh, like we know, we know, and the next part is super hard and nobody's figured it out. And, uh, exactly with this RLVR thing where like everyone's talking about, well, okay, how do we do the next stage of the non-verifiable stuff? And everyone's like, I don't know, you know, LLM judge.Jeff Dean [00:44:56]: I mean, I feel like the nice thing about this field is there's lots and lots of smart people thinking about creative solutions to some of the problems that we all see. Uh, because I think everyone sort of sees that the models, you know, are great at some things and they fall down around the edges of those things and, and are not as capable as we'd like in those areas. And then coming up with good techniques, trying those, and seeing which ones actually make a difference is sort of what the whole research aspect of this field is pushing forward. And I think that's why it's super interesting. You know, if you think about two years ago, we were struggling with GSM8K problems, right? Like, you know, Fred has two rabbits. He gets three more rabbits. How many rabbits does he have? That's a pretty far cry from the kinds of mathematics that the models can do now; you're doing IMO and Erdős problems in pure language. Yeah. Yeah. Pure language. So that is a really, really amazing jump in capabilities in, you know, a year and a half or something. And I think, um, for other areas, it'd be great if we could make that kind of leap. Uh, and you know, we don't exactly see how to do it for some, some areas, but we do see it for some other areas and we're going to work hard on making that better. Yeah.Shawn Wang [00:46:13]: Yeah.Alessio Fanelli [00:46:14]: Like YouTube thumbnail generation. That would be very helpful. We need that. That would be AGI. We need that.Shawn Wang [00:46:20]: That would be. As far as content creators go.Jeff Dean [00:46:22]: I guess I'm not a YouTube creator, so I don't care that much about that problem, but I guess, uh, many people do.Shawn Wang [00:46:27]: It does. Yeah. It doesn't, it doesn't matter. People do judge books by their covers as it turns out.
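One recipe Jeff mentions for less verifiable domains is to use a model, possibly the same model under a different prompt, as a critic that scores what a first pass retrieved or produced, for example rating 2,000 retrieved items and keeping the 50 most relevant. A minimal sketch of that pattern follows; call_model is a hypothetical stand-in for whatever LLM client you actually use.

```python
# Sketch of "same model, prompted differently, as a critic" for a non-verifiable task.
# `call_model` is a hypothetical placeholder for a real LLM call; the rest is plumbing.

def call_model(prompt: str) -> str:
    raise NotImplementedError("stand-in for your actual LLM client")

CRITIC_PROMPT = (
    "You are acting as a strict relevance judge, not a retriever.\n"
    "Query: {query}\nDocument: {doc}\n"
    "Reply with a single integer 0-10 for how useful this document is for the query."
)

def critic_score(query: str, doc: str) -> int:
    reply = call_model(CRITIC_PROMPT.format(query=query, doc=doc))
    try:
        return max(0, min(10, int(reply.strip().split()[0])))
    except (ValueError, IndexError):
        return 0  # unparseable critic output counts as irrelevant

def keep_most_relevant(query: str, retrieved: list[str], keep: int = 50) -> list[str]:
    """The first pass retrieved many candidates; the critic pass keeps the best `keep`."""
    scored = sorted(retrieved, key=lambda d: critic_score(query, d), reverse=True)
    return scored[:keep]
```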
Um, uh, just to draw a bit on the IMO goal. Um, I'm still not over the fact that a year ago we had AlphaProof and AlphaGeometry and all those things. And then this year we were like, screw that, we'll just chuck it into Gemini. Yeah. What's your reflection? Like, I think this, this question about, like, the merger of, like, symbolic systems and, and LLMs, uh, was very much a core belief. And then somewhere along the line, people just said, nope, we'll just do it all in the LLM.Jeff Dean [00:47:02]: Yeah. I mean, I think it makes a lot of sense to me because, you know, humans manipulate symbols, but we probably don't have like a symbolic representation in our heads. Right. We have some distributed representation that is neural net-like in some way, of lots of different neurons and activation patterns firing when we see certain things, and that enables us to reason and plan and, you know, do chains of thought and, you know, roll them back: now that, that approach for solving the problem doesn't seem like it's going to work, I'm going to try this one. And, you know, in a lot of ways we're emulating what we intuitively think, uh, is happening inside real brains in neural net based models. So it never made sense to me to have, like, completely separate, uh, discrete, uh, symbolic things, and then a completely different way of, of, uh, you know, thinking about those things.Shawn Wang [00:47:59]: Interesting. Yeah. Uh, I mean, it maybe seems obvious to you, but it wasn't obvious to me a year ago. Yeah.Jeff Dean [00:48:06]: I mean, I do think, like, that IMO work with, you know, translating to Lean and using Lean, and also a specialized geometry model, and then this year switching to a single unified model that is roughly the production model with a little bit more inference budget, uh, is actually, you know, quite good, because it shows you that the capabilities of that general model have improved dramatically and, and now you don't need the specialized model. This is actually sort of very similar to the 2013 to '16 era of machine learning, right? Like, it used to be people would train separate models for each different problem, right? I want to recognize street signs or something, so I train a street sign recognition model, or I want to, you know, do speech recognition, so I have a speech model, right? I think now the era of unified models that do everything is really upon us. And the question is how well do those models generalize to new things they've never been asked to do, and they're getting better and better.Shawn Wang [00:49:10]: And you don't need domain experts. Like one of my, uh, so I interviewed ETA who was on, who was on that team. Uh, and he was like, yeah, I, I don't know how they work. I don't know where the IMO competition was held. I don't know the rules of it. I just trained the models. Yeah. Yeah. And it's kind of interesting that like people with these, this like universal skill set of just like machine learning, you just give them data and give them enough compute and they can kind of tackle any task, which is the bitter lesson, I guess. I don't know. Yeah.Jeff Dean [00:49:39]: I mean, I think, uh, general models, uh, will win out over specialized ones in most cases.Shawn Wang [00:49:45]: Uh, so I want to push there a bit. I think there's one hole here, which is like, uh.
There's this concept of like, uh, maybe capacity of a model, like abstractly a model can only contain the number of bits that it has. And, uh, you know, God knows, like, Gemini Pro is like one to 10 trillion parameters. We don't know. But, uh, the Gemma models, for example, right? Like a lot of people want, like, the open source local models that are like that, and, uh, they have some knowledge in them which is not necessary, right? Like they can't know everything. Like, you have the luxury of the big model, and the big model should be capable of everything. But like when, when you're distilling and you're going down to the small models, you know, you're actually memorizing things that are not useful. Yeah. And so like, how do we, I guess, do we want to extract that? Can we, can we divorce knowledge from reasoning, you know?Jeff Dean [00:50:38]: Yeah. I mean, I think you do want the model to be most effective at reasoning if it can retrieve things, right? Because having the model devote precious parameter space to remembering obscure facts that could be looked up is actually not the best use of that parameter space, right? Like you might prefer something that is more generally useful in more settings than this obscure fact that it has. Um, so I think that's always a tension. At the same time, you also don't want your model to be kind of completely detached from, you know, knowing stuff about the world, right? Like it's probably useful to know how long the Golden Gate Bridge is, just as a general sense of, like, how long bridges are, right? And, uh, it should have that kind of knowledge. It maybe doesn't need to know how long some teeny little bridge in some other more obscure part of the world is, but, uh, it does help it to have a fair bit of world knowledge, and the bigger your model is, the more you can have. Uh, but I do think combining retrieval with sort of reasoning and making the model really good at doing multiple stages of retrieval. Yeah.Shawn Wang [00:51:49]: And reasoning through the intermediate retrieval results is going to be a, a pretty effective way of making the model seem much more capable, because if you think about, say, a personal Gemini, yeah, right?Jeff Dean [00:52:01]: Like we're not going to train Gemini on my email. Probably we'd rather have a single model that, uh, we can then use, being able to retrieve from my email as a tool, and have the model reason about it and retrieve from my photos or whatever, uh, and then make use of that and have multiple, um, you know, uh, stages of interaction. That makes sense.Alessio Fanelli [00:52:24]: Do you think the vertical models are like, uh, an interesting pursuit? Like when people are like, oh, we're building the best healthcare LLM, we're building the best law LLM, are those kind of like short-term stopgaps or?Jeff Dean [00:52:37]: No, I mean, I think, I think vertical models are interesting. Like you want them to start from a pretty good base model, but then you can sort of, uh, view them as enriching the data distribution for that particular vertical domain, for healthcare, say, or for, say, robotics. We're probably not going to train Gemini on all possible robotics data we could train it on, because we want it to have a balanced set of capabilities. Um, so we'll expose it to some robotics data, but if you're trying to build a really, really good robotics model, you're going to want to start with that and then train it on more robotics data. And then maybe that would hurt its multilingual translation capability, but improve its robotics capabilities. And we're always making these kinds of, uh, you know, trade-offs in the data mix that we train the base Gemini models on. You know, we'd love to include data from 200 more languages and as much data as we have for those languages, but that's going to displace some other capabilities of the model. It won't be as good at, um, you know, Perl programming. You know, it'll still be good at Python programming, because we'll include enough of that, but there's other long tail computer languages or coding capabilities that it may suffer on, or multi, uh, multimodal reasoning capabilities may suffer, because we didn't get to expose it to as much data there, but it's really good at multilingual things.
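The trade-off described above, enriching the training mix for one vertical at the cost of displacing other capabilities, is just a reweighting of the pre-training mixture under a fixed token budget. A toy sketch of that bookkeeping; the domains and proportions are invented for illustration and do not reflect any real Gemini data mix.

```python
# Toy sketch of the data-mixture trade-off: upweighting one vertical under a fixed
# token budget necessarily displaces other capabilities. All numbers are invented.

base_mix = {            # fractions of the pre-training token budget
    "web_text": 0.55,
    "code": 0.20,
    "multilingual": 0.15,
    "multimodal": 0.08,
    "robotics": 0.02,
}

def enrich(mix, domain, new_fraction):
    """Give `domain` a larger share and shrink everything else proportionally."""
    others = {k: v for k, v in mix.items() if k != domain}
    scale = (1.0 - new_fraction) / sum(others.values())
    out = {k: v * scale for k, v in others.items()}
    out[domain] = new_fraction
    return out

robotics_heavy = enrich(base_mix, "robotics", 0.30)
for k in base_mix:
    print(f"{k:12s} {base_mix[k]:.2f} -> {robotics_heavy[k]:.2f}")
# robotics goes 0.02 -> 0.30 while, e.g., multilingual shrinks from 0.15 to about 0.11:
# better robotics capability, at some cost to everything that got displaced.
```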
So I, I think some combination of specialized models, maybe more modular models. So it'd be nice to have the capability to have those 200 languages, plus this awesome robotics model, plus this awesome healthcare, uh, module, that all can be knitted together to work in concert and called upon in different circumstances, right? Like if I have a health related thing, then it should enable using this health module in conjunction with the main base model to be even better at those kinds of things. Yeah.Shawn Wang [00:54:36]: Installable knowledge. Yeah.Jeff Dean [00:54:37]: Right.Shawn Wang [00:54:38]: Just download as a, as a package.Jeff Dean [00:54:39]: And some of that installable stuff can come from retrieval, but some of it probably should come from preloaded training on, you know, uh, a hundred billion tokens or a trillion tokens of health data. Yeah.Shawn Wang [00:54:51]: And for listeners, I think, uh, I will highlight the Gemma 3n paper, where there was a little bit of that, I think. Yeah.Alessio Fanelli [00:54:56]: Yeah. I guess the question is like, how many billions of tokens do you need to outpace the frontier model improvements? You know, it's like, if I have to make this model better at healthcare and the main Gemini model is still improving, do I need 50 billion tokens? Can I do it with a hundred? If I need a trillion healthcare tokens, they're probably not out there, you know. I think that's really like the...Jeff Dean [00:55:21]: Well, I mean, I think healthcare is a particularly challenging domain, so there's a lot of healthcare data that, you know, we don't have access to appropriately, but there's a lot of, you know, uh, healthcare organizations that want to train models on their own data, which is not public healthcare data. Uh, not public health, but healthcare data that's not public. Um, so I think there are opportunities there to, say, partner with a large healthcare organization and train models for their use that are going to be, you know, more bespoke, but, uh, probably might be better than a general model trained on, say, public data. Yeah.Shawn Wang [00:55:58]: Yeah. I, I believe, uh, by the way, also this is like somewhat related to the language conversation. Uh, I think one of your, your favorite examples was you can put a low resource language in the context and it just learns.
Yeah.Jeff Dean [00:56:09]: Oh, yeah, I think the example we used was Kalamang, which is truly low resource because it's only spoken by, I think, 120 people in the world and there's no written text.Shawn Wang [00:56:20]: So, yeah. So you can just do it that way. Just put it in the context. Yeah. Yeah. But you can put the whole data set in the context, right.Jeff Dean [00:56:27]: If you, if you take a language like, uh, you know, Somali or something, there is a fair bit of Somali text in the world, uh, or Ethiopian Amharic or something, um, and, you know, we're probably not putting all the data from those languages into the base Gemini training. We put some of it, but if you put more of it, you'll improve the capabilities of those models.Shawn Wang [00:56:49]: Yeah.
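The Kalamang example is, mechanically, just prompt construction at very long context: pack the available reference materials (a grammar sketch, a word list, and parallel sentences, as in the published machine-translation-from-one-book setup) into the prompt and ask for a translation. A schematic sketch; the call_model helper is a hypothetical placeholder, not a real API.

```python
# Schematic: "teach" a model a very low-resource language purely in context by
# packing reference materials into one long prompt.

def build_translation_prompt(grammar: str, word_list: str, parallel_sentences: str,
                             sentence: str) -> str:
    # The interesting part is not the code but the scale: for a language with no
    # web presence, these reference materials alone can run to hundreds of
    # thousands of tokens, which is exactly the long-context regime discussed above.
    return (
        "Below are a grammar sketch, a word list, and parallel sentences for Kalamang.\n"
        "Use only these materials to translate.\n\n"
        f"GRAMMAR:\n{grammar}\n\nWORD LIST:\n{word_list}\n\n"
        f"PARALLEL SENTENCES:\n{parallel_sentences}\n\n"
        f"Translate into English: {sentence}\n"
    )

def call_model(prompt: str) -> str:
    raise NotImplementedError("hypothetical stand-in for a long-context model endpoint")

# Usage would look like:
# prompt = build_translation_prompt(grammar_text, word_list_text, pairs_text, "some Kalamang sentence")
# translation = call_model(prompt)
```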
Dr. Roni Avissar is an atmospheric scientist, helicopter pilot, and professor at the University of Miami. His research is crucial to the modern field of weather analysis. In a unique twist on the role weather plays in aviation, Roni conducts his field research using a specially-equipped helicopter, often flying extremely low to the surface to collect data. His work has been funded by NASA, NOAA, and the US Departments of Energy and Agriculture, among others.In this conversation, I'll speak with Roni about the turbulent world of weather forecasting, and our relationship as pilots to the natural world around us.
NASA is under new leadership with Jared Isaacman. Listen in as he answers questions from Aviation Week's Irene Klotz after eight weeks on the job in this special episode presented by Editor-in-Chief Joe Anselmo. --- Nominations are now open for the Space Tech Challenge Awards—could your solution be a winner? Find out more and apply here https://spacetechchallenge.aviationweek.com/ The Space Tech Challenge Awards connect execution-ready innovations with the government agencies, prime contractors, and commercial operators actively seeking them. From lunar operations to Mars missions, the space industry faces nearly 200 validated capability gaps. The Aviation Week Space Tech Challenge Awards recognize solutions already in development — prototypes tested and advancing toward deployment. Presented at Space Tech Expo USA, this program connects working technologies with government agencies, prime contractors, and commercial operators ready to integrate them.
Where did Earth’s water come from? In this episode of Planetary Radio, we explore how scientists are answering that question by studying a remarkably well-preserved record of the early Solar System: lunar samples brought back by the Apollo missions. Host Sarah Al-Ahmed is joined by Tony Gargano, postdoctoral fellow at the Lunar and Planetary Institute with the Universities Space Research Association and a research affiliate at NASA’s Johnson Space Center. Gargano studies lunar rocks and regolith to understand how planets form, evolve, and acquire key ingredients like water over time. By analyzing subtle chemical fingerprints preserved in Apollo-era lunar regolith, his work helps constrain how much water meteorites could have brought to Earth and what that means for our planet’s path to habitability. The episode also features a short bonus segment with actor George Takei, recorded at the Academy Museum of Motion Pictures during a screening of “Star Trek IV: The Voyage Home.” Takei reflects on the enduring legacy of “Star Trek,” its influence on generations of scientists and explorers, and why he is excited about humanity’s return to the Moon in the Artemis era. He connects science fiction’s hopeful vision of the future with the real science helping us understand our origins today. Discover more at: https://www.planetary.org/planetary-radio/2026-earth-water-apollo-moon-dust See omnystudio.com/listener for privacy information.
Apple is scaling back its plans for its AI-based health coach service. Could Apple's next AirPods Pro come with cameras in them? The iPhone 17 Pro Max has the best battery life out of a plethora of other smartphones! And Apple's Lockdown Mode helped prevent the FBI from accessing a WaPo reporter's iPhone. Apple is scaling back plans for new AI-based health coach service. Apple's next AirPods Pro will come with cameras, says leaker. Leak suggests Apple's M5 Pro and M5 Max may be the same chip. NASA changes its mind, will allow Artemis astronauts to take iPhones to the Moon. Google & Apple CEOs offer seemingly contradictory statements regarding AI partnership. New Alexa's issues are already making some users return to old Siri. New Apple-backed AI model can generate sound and speech from silent videos. iPhone 17 Pro Max has the best battery life of 35 smartphones tested. Last week on my Mac: Why E cores make Apple silicon fast. FBI couldn't get into WaPo reporter's iPhone because it had Lockdown Mode enabled. Oura's FDA lobbying benefits Apple Watch, if everyone's smart about the risks. Apple Music Replay 2026 now live, here's how to find it. Ferrari's new Jony Ive–designed EV is swathed in glass and aluminum. Applications are now open for the 2026 Swift Student Challenge -- but hurry. Apple Arcade's 'Civilization VII' is good, but falls short of greatness Picks of the Week Dan's Pick: Ponies on Peacock Leo's Pick: Moody Andy's Pick: Hourly Comic Day 2026 Jason's Pick: Curling Hosts: Leo Laporte, Andy Ihnatko, and Jason Snell Guest: Dan Moren Download or subscribe to MacBreak Weekly at https://twit.tv/shows/macbreak-weekly. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit Sponsor: zocdoc.com/macbreak
NASA's Artemis II mission, which will send humans around the moon for the first time in over five decades, could launch as early as March. This is part of a larger campaign to establish a long-term presence on the moon and eventually prepare for human space flight to Mars.Meanwhile, China also has a goal of landing humans on the moon by 2030, setting up a kind of modern space race. One reason for the rush: It's like a game of finders keepers, said Saadia Pekkanen, a professor focused on space law and policy at the University of Washington.
Podcast music by Koji Kobura. A modern continuation of the serious research into the infamous Face on Mars, more than 20 years after authors and researchers wrote books detailing its anomalous nature and the perceived evidence of extraterrestrial origin and intelligent design on the surface of Mars. Despite all of the details and facts those authors and researchers presented to make the case that the Face is scientifically worthy of further discussion and continued visual study, NASA has claimed the Face of Cydonia was successfully and scientifically debunked in 2007, officially asserting that it is only a natural object and that what is actually seen in the image data is a trick of light and shadow showing no intelligent deliberateness. NASA declared the case for the Face closed and was not willing, or even interested, to discuss the subject seriously or investigate the matter further. Sadly, this official position of denial has gone unchallenged by the previous researchers and authors, and by any new ones since, even though the case for the Face remains scientifically valuable for continued debate and study of the image data we had back then, and the vast amount of image data now available to us clearly shows that the case for the Face is far from closed, nor close to being over! Gary Leggiere — also known as The Mars Revealer — is an independent researcher and long-time investigator of the infamous Face on Mars. With over two decades of focused study, Gary has hosted radio shows, interviewed key anomaly researchers, and built one of the most extensive public archives dedicated to the topic. The Faces of Mars is his debut book — a culmination of decades spent challenging NASA's claims and uncovering suppressed image data that may point to artificial structures on the Red Planet. Gary is the founder of The Martian Revelation show, where he continues to speak publicly on planetary anomalies, space mysteries, and the search for intelligent life beyond Earth. For interviews or inquiries, email: marsrevealer at gmail dot com. Become a supporter of this podcast: https://www.spreaker.com/podcast/earth-ancients--2790919/support.
NASA paying for weed smokers, Trip Sitters, Christie's thoughts during Trump's speech, Top 10 Reasons why Ben Carson pulled out, Kanye's a pirate, and "You got a warrant, you go to jail," with other news!
SHOW SCHEDULE 2-9-2026. 1828 BANK OF ENGLAND. Guests: Bill Roggio and Husain Haqqani. Al-Qaeda has grown significantly since 9/11, maintaining a long-term vision for a global caliphate and establishing safe havens in Afghanistan and Syria, unlike the more isolated ISIS. Guests: Husain Haqqani and Bill Roggio. Al-Qaeda veteran Ahmed al-Shara's presidency in Syria highlights the group's diplomatic manipulation and Western naivety in accepting jihadists who adopt modern suits and polished personas. Guests: Ernesto Araujo and Alejandro Peña Esclusa. Conservatives gathered in Brussels to champion freedom of speech and consolidate the "Foro Madrid," a transatlantic alliance uniting Latin American and European leaders against socialism. Guests: Ernesto Araujo and Alejandro Peña Esclusa. Venezuelan regime factions clash over detaining opposition figures, while Brazilian conservative Flavio Bolsonaro seeks international support to combat totalitarianism ahead of the upcoming national election. Guests: Bill Roggio and Jonathan Schanzer. Reports indicate Iran's regime has killed thousands to suppress ongoing unrest, feigning diplomatic willingness while maintaining a paranoid grip on power and refusing real concessions. Guests: Bill Roggio and David Daoud. Hezbollah leader Naim Qassem pledges loyalty to Iran, threatening asymmetric attacks on global U.S. assets if the "mothership" is struck, while organizing for Lebanese elections. Guests: Gordon Chang and Peter Huessy. China reportedly conducted secret underground nuclear tests to develop battlefield weapons for coercion, ignoring arms control treaties while the U.S. struggles to modernize its own deterrents. Guests: Gordon Chang and Brandon Weichert. NASA's Artemis 2 moon mission faces indefinite delays due to SLS rocket flaws, leading experts to urge replacing the bureaucratic program with SpaceX's efficient Starship system. Guests: Bill Roggio and Bridget Tumi. The Houthis maintain improved military capabilities despite a temporary lull in attacks, remaining a persistent threat to Red Sea shipping and eager to support Iran if conflict erupts. Guests: Bill Roggio and John Hardie. Trilateral peace talks regarding Ukraine show limited progress on core issues, while Russia faces communication disruptions from Starlink denials and continues striking Ukrainian energy infrastructure. Guests: Marianna Yarovskaya and Lyuba Sobol. Filmmaker Yarovskaya and activist Sobol discuss their documentary "Lyuba's Hope," highlighting the severe repression in Putin's Russia and the struggle of exiles fighting for democracy. Guests: Marianna Yarovskaya and Lyuba Sobol. Lyuba Sobol represents democratic Russian forces at the Council of Europe, aiming to delegitimize Putin, while facing continued threats and surveillance alongside other exiled activists. Guests: Bill Roggio and Ahmed Sharawi. Syrian leader Ahmed al-Shara secures resources by integrating the Kurdish SDF into his forces, while the U.S. watches for red lines regarding threats to Israel or regional stability. Guests: Bill Roggio and Edmund Fitton-Brown. The U.S. deploys military assets to pressure a defiant Iran, but the weakened regime refuses concessions to avoid looking vulnerable, relying on bluster and proxy distractions. Guest: Peter Berkowitz. Berkowitz argues that "National Conservatism," which seeks to root public life in a specific Christian vision, contradicts America's founding principles of religious pluralism and constitutional liberty. Guest: Craig Unger.
Unger details Donald Trump's early alleged ties to Russian state security and the mob, beginning with the Commodore Hotel deal and continuing through real estate money laundering.
What happens when your lifelong dream slips through your fingers—and you believe the door has permanently closed? In this powerful episode of The Mark Divine Show, NASA astronaut Anil Menon shares the untold story behind his journey to space—from repeated rejection, self-doubt, and believing the odds were zero… to rebuilding belief, training his mind, and ultimately earning his place among the world's most elite explorers. This conversation goes far beyond spaceflight. You'll learn:
- How to rebuild confidence after failure
- Why mental training matters more than talent
- How belief reshapes behavior—and outcomes
- What elite teams (NASA, SpaceX, SEALs) do differently under pressure
- Why it's never too late to reopen a door you thought was shut
Anil's story is proof that resilience isn't about grinding harder—it's about aligning purpose, belief, and disciplined mental training. If you've ever felt behind, doubted yourself, or questioned whether you missed your moment—this episode is for you. Want to train your mind like elite performers, leaders, and astronauts?