Podcasts about Mars rovers

  • 755 PODCASTS
  • 1,049 EPISODES
  • 38m AVG DURATION
  • 1 MONTHLY NEW EPISODE
  • May 14, 2025 LATEST
POPULARITY

[Popularity trend for "mars rovers", 2017-2024]


Best podcasts about Mars rovers


Latest podcast episodes about Mars rovers

About Space Today
Part 2 - Destination Mars

May 14, 2025 · 12:06


Did you know that we have rocks from Mars and cameras on a Mars rover? Plus, astronaut Buzz Aldrin says there's a monolith on a Martian moon. Join award-winning broadcast journalists David Denault and John Gomez as they explore the Red Planet.

Poster Boys
"A Dog On Mars Who Can Drive A Four-Wheel Drive" | The Rover

Apr 22, 2025 · 48:38


The Rover is a lot like Lady Gaga's ARTPOP, in that it could mean anything. Some of our guesses: it's about the Mars Rover; it's about a dog; it's about both the Mars Rover and a dog. But it turns out that The Rover is actually about Guy Pearce and Robert Pattinson both refusing to enunciate their lines while shooting their way through post-apocalyptic Australia on what IMDb very charitably refers to as a "road trip". Plus, Harry questions why admin still exists in the post-apocalypse, Stacey wonders if we're going to be doing the podcast for the rest of our lives, and we both kind of just turn on each other. Don't forget to subscribe to Poster Boys, and leave us a comment and rating! Keen to see our beautiful faces? You can watch every episode on YouTube @posterboyspod! Get in touch with us at posterboyspod@gmail.com. Follow Poster Boys on Instagram @posterboyspod and TikTok @posterboyspod. Hosted on Acast. See acast.com/privacy for more information.

small acts of rebellion
Aaron Brown: Designing for Impact

Jan 29, 2025 · 41:15


In Episode 5 of Season 2, Aaron Brown, an associate professor of engineering at Colorado State University, shares his unconventional career journey spanning humanitarian engineering, renewable energy, and systems thinking. Starting as a self-described “marginal professional cyclist” racing in Italy, Aaron eventually transitioned to working on cutting-edge aerospace projects, including the Mars Rover landing mechanism. Despite reaching what many consider the pinnacle of engineering success, Aaron found himself unfulfilled. He pivoted toward humanitarian engineering, focusing on empowering underserved communities through sustainable technology solutions. From developing solar heating for low-income neighborhoods to 3D printing ventilator components during COVID-19, Aaron's work aligns technical expertise with social impact. He also opens up about navigating academia, the complexities of workplace culture shifts, and the importance of finding environments that align with personal values. This conversation explores the intersection of innovation, ethics, and impact, offering insights on trusting your instincts, recognizing when it's time to pivot, and applying creativity to solve real-world problems. For those interested in engineering for social good, career transitions, or aligning work with values, this episode delivers thought-provoking takeaways and inspiration.
Guest information: LinkedIn - https://www.linkedin.com/in/aaron-brown-phd-13258615/
References:
Engineers Without Borders - https://ewb-usa.org/
Veterans Without Borders - https://www.vwb.org/
Soda Can Solar Heating Project - https://www.nprillinois.org/2014-03-06/soda-can-solar-furnace-helps-cut-heating-bills
Don't forget to subscribe and leave a review if you enjoyed this episode.
Credits and acknowledgements: Hosted, produced, and edited by Heather Pridemore. https://www.linkedin.com/in/heather-pridemore-mba/
Thank you for tuning into small acts of rebellion. Ready to start a revolution? Please share it with others who aspire to redefine success on their own terms. Don't forget to subscribe for more stories of personal and professional defiance. For additional content, follow us on Instagram @smallactsofrebellionpodcast & @PridemoreCoaching and visit us at PridemoreCoaching.com. Keep owning your story!

Agile Rabbit
Dr. Claire Newman | Weather on Mars

Jan 2, 2025 · 34:01


Have you ever wondered what the weather is like on Mars? In this special live event, join a world-leading scientist who works on two Mars rovers to find out. Dr. Claire Newman is a planetary atmospheric specialist who studies weather and climate on Mars. We take a closer look at what recent surface missions have taught us and why NASA scientists are so curious about the Red Planet. Claire shares weather reports from the Perseverance rover which describe a tumultuous place of violent dust storms, desert landscapes, and wildly fluctuating temperatures. Together we explore the key differences and similarities between weather on Earth and Mars.
CLAIRE NEWMAN
Dr. Claire Newman is a planetary atmospheric scientist who works on weather and climate on Mars and Titan, specialising in the study of dust storms. She is a team member on the Mars Science Laboratory Curiosity rover, InSight Mars lander, and Mars 2020 Perseverance rover, as well as the upcoming Dragonfly Titan rotorcraft.

The John Batchelor Show
GOOD EVENING: The show begins in Ukraine with General Frost...

Dec 19, 2024 · 7:18


GOOD EVENING: The show begins in Ukraine with General Frost... CBS EYE ON THE WORLD WITH JOHN BATCHELOR. 1919 Lemberg.
FIRST HOUR
9-915 #Ukraine: General Frost dominates. Colonel Jeff McCausland, USA (retired) @mccauslj @CBSNews @dickinsoncol
915-930 #Syria: Balkanized and fog of war. Colonel Jeff McCausland, USA (retired) @mccauslj @CBSNews @dickinsoncol
930-945 #Regulation: Adds 30% to any price. John Cochrane, Hoover Institution
945-1000 #France: Best moments for an Englishman living in the South of France. Simon Constable, Occitanie
SECOND HOUR
10-1015 #StateThinking: Anticipating POTUS-elect, Syria to Russia. @MaryKissel, former senior adviser to the Secretary of State; executive VP, Stephens Inc.
1015-1030 #StateThinking: Trump has a record of pushing back on the Kremlin. Mary Kissel, former senior adviser to the Secretary of State; executive VP, Stephens Inc.
1030-1045 #SpaceX: Starbase, Texas. Bob Zimmerman, BehindtheBlack.com
1045-1100 #Mars: Rovers relentless. Bob Zimmerman, BehindtheBlack.com
THIRD HOUR
1100-1115 1/4: Ingenious: A Biography of Benjamin Franklin, Scientist, by Richard Munson (hardcover, November 12, 2024). https://www.amazon.com/Ingenious-Biography-Benjamin-Franklin-Scientist/dp/0393882233 Benjamin Franklin was one of the preeminent scientists of his time. Driven by curiosity, he conducted cutting-edge research on electricity, heat, ocean currents, weather patterns, chemical bonds, and plants. But today, Franklin is remembered more for his political prowess and diplomatic achievements than his scientific creativity. In this incisive and rich account of Benjamin Franklin's life and career, Richard Munson recovers this vital part of Franklin's story, reveals his modern relevance, and offers a compelling portrait of a shrewd experimenter, clever innovator, and visionary physicist whose fame opened doors to negotiate French support and funding for American independence. Munson's riveting narrative explores how science underpins Franklin's entire story, from tradesman to inventor to nation-founder, and argues that Franklin's political life cannot be understood without giving proper credit to his scientific accomplishments. 8 pages of illustrations.
1115-1130 2/4: Ingenious: A Biography of Benjamin Franklin, Scientist, by Richard Munson
1130-1145 3/4: Ingenious: A Biography of Benjamin Franklin, Scientist, by Richard Munson
1145-1200 4/4: Ingenious: A Biography of Benjamin Franklin, Scientist, by Richard Munson
FOURTH HOUR
12-1215 #PRC: Deflation. Anne Stevenson-Yang, author of Wild Ride: China's Short-Lived Experiment in Capitalism, on @GordonGChang, Gatestone, Newsweek, The Hill. https://www.cnbc.com/2024/12/12/china-stresses-plans-to-boost-growth-at-top-agenda-setting-meeting.html
1215-1230 #Korea: Tangled by constitutional law. David Maxwell, vice president of the Center for Asia Pacific Strategy, on the latest in South Korea, including this: https://www.atlanticcouncil.org/blogs/new-atlanticist/the-global-ripple-effects-of-south-koreas-political-turmoil/
1230-1245 #Canada: Trudeau disdained. What about PRC interference? Charles Burton, senior fellow at Sinopsis, on this: https://www.bbc.com/news/articles/c5y49ym6em3o
1245-100am #Drones: China, Iran, suspects. Blaine Holt, retired Air Force general who served as deputy military representative to NATO, on the latest on the drones.

The John Batchelor Show
#MARS: Rovers relentless. Bob Zimmerman BehindtheBlack.com

Dec 19, 2024 · 4:45


#MARS: Rovers relentless. Bob Zimmerman BehindtheBlack.com 1930

SpaceTime with Stuart Gary | Astronomy, Space & Science News
Venus' Uninhabitable Truth, Mars Rover's New Frontier, and Quantum Sensors in Space: S27E150

Dec 13, 2024 · 25:24


SpaceTime Series 27 Episode 150
*Venus: A Lifeless World
New research has debunked the long-standing theory that Venus might have once been habitable. Despite being Earth's sister planet, a study of its atmospheric chemistry reveals that Venus has always been too dry to support oceans, making it an inhospitable world throughout its history. These findings, published in Nature Astronomy, have significant implications for the search for life on exoplanets, suggesting a focus on more Earth-like candidates.
*Mars Perseverance Rover Reaches Jezero Crater Rim
NASA's Mars Perseverance Rover has successfully reached the rim of Jezero Crater, where it is examining the Pico Turquino region. This area could provide insights into ancient geological processes on Mars, potentially revealing clues about the planet's past climate and the impact that formed the crater.
*Quantum Sensors in Space
NASA's Cold Atom Lab aboard the International Space Station has achieved a groundbreaking milestone by using ultra-cold atoms to detect environmental changes in space. This marks a new era in quantum science, with potential applications in studying planetary compositions and testing fundamental theories of gravity.
00:00 New study suggests Venus was never habitable; quantum sensor used in space
00:26 New study has shown that the planet Venus was never habitable
06:32 NASA's Mars Perseverance Rover has finally reached the rim of Jezero
08:56 NASA's Cold Atom Lab has taken another step towards quantum science in space
16:33 Permafrost thawing due to climate change could lead to wildfires
19:30 New images have emerged of what's reported to be the famed Loch Ness Monster
23:41 SpaceTime podcast features Stuart Gary talking about Bigfoot in America
www.spacetimewithstuartgary.com
www.bitesz.com

Mafia Memoirs by Zenware
513 - Brush Ax Toss at Braun Brushes - SEMA

Nov 7, 2024 · 5:07


We sit down with Max Cheney to discuss the family legacy of Braun Brushes and the specialty brushes Braun has created for the Mars rover, aviation, and automotive applications.

Astronomy Daily - The Podcast
S03E187: Betelgeuse's Stellar Sidekick, Mars Rover's Rocky Road, and Cosmic Radio Riddles

Oct 23, 2024 · 12:58


Astronomy Daily - The Podcast: S03E187
Welcome to Astronomy Daily, your trusted source for the latest and most exciting space and astronomy news. I'm Anna, and today we're embarking on a cosmic adventure filled with fascinating discoveries and developments from the universe.
Highlights:
- Betelgeuse's Mysterious Behavior: Explore the latest theory about Betelgeuse, the enigmatic red supergiant star. Scientists suspect it might have a companion, affectionately dubbed "Beetle Buddy," which could explain its recent dimming and brightening. Could this cosmic giant be hiding secrets about its future supernova potential?
- Mars Rover's Ascent: Get the latest updates from Mars, where NASA's Perseverance rover is navigating the challenging terrain of the Jezero Crater rim. Discover its scientific endeavors, including capturing stunning images of Mars' moons and studying Martian rocks for clues about the planet's geological history.
- Unusual Cosmic Radio Signal: Delve into the mystery of a newly discovered cosmic radio pulse, ASKAP J1935+2148, with an unprecedented cycle of nearly an hour. What could be causing this bizarre behavior, and how might it challenge our understanding of neutron stars and white dwarfs?
- NASA's Future Challenges: A recent report highlights critical issues facing NASA, from outdated infrastructure to budget mismatches. Explore the recommendations for rebalancing priorities and the tough decisions that lie ahead for the agency.
- James Webb Space Telescope's Discoveries: Uncover groundbreaking observations of ancient quasars by the James Webb Space Telescope. These findings challenge our current models of black hole growth and galaxy formation, revealing surprisingly lonely supermassive black holes in the early universe.
For more space news, visit our website at astronomydaily.io. There, you can sign up for our free daily newsletter, check out our sponsor links for great deals, and catch up on all the latest news with our constantly updating newsfeed. You'll also find all our previous episodes available for listening.
Don't forget to follow us on social media. Just search for #AstroDailyPod on Facebook, X, YouTube, Tumblr, and TikTok to stay connected with us between episodes.
Thank you for tuning in. This is Anna signing off. Until next time, keep looking up and stay curious about the wonders of our universe.
Sponsor links:
NordVPN - www.bitesz.com/nordvpn - currently up to 74% off + 3 extra months
Old Glory - www.bitesz.com/oldglory - sport and entertainment merch; over 100,000 items in stock
Proton Mail - www.bitesz.com/protonmail - secure email that protects your privacy
Malwarebytes - www.bitesz.com/malwarebytes - premium protection for you and all your devices
Become a supporter of this podcast: https://www.spreaker.com/podcast/astronomy-daily-the-podcast--5648921/support

SpaceTime with Stuart Gary | Astronomy, Space & Science News
Sun's Fiery Embrace, First Stars' Mystery, and Mars Rover's Triumph

Oct 17, 2024 · 23:27


SpaceTime Series 27 Episode 126
*NASA's Parker Solar Probe Completes 21st Perihelion Pass of the Sun
NASA's Parker Solar Probe has achieved its 21st close encounter with the Sun, matching its previous distance and speed records. The spacecraft swooped to within 7.26 million kilometers of the solar surface at a record speed of 635,300 km/h. This flyby sets up the probe for its final closest approaches, with its orbit shaped by a Venus gravity assist. The mission, launched in 2018, aims to study the Sun's corona and the solar wind, unraveling the mysteries of solar phenomena that impact the solar system.
*Webb Space Telescope Finds Potential Missing Link to First Stars
Astronomers using NASA's Webb Space Telescope have identified a galaxy with an unusual light signature that could be a missing link in galactic evolution. The galaxy, found approximately a billion years after the Big Bang, features gas outshining its stars, possibly due to massive, hot stars. This discovery offers insights into the transition from the universe's first stars to more familiar galaxies, providing a glimpse into the early cosmic environment.
*Perseverance Rover's Key Science Instrument Restored
NASA's Perseverance rover on Mars has regained the use of its critical SHERLOC instrument after a six-month effort. The spectroscope, crucial for detecting organics and assessing habitability, had malfunctioned in January. The successful repair allows the rover to continue its mission of analyzing Martian rocks and soil for signs of past life and understanding the planet's geological history.
*The Science Report
A new study suggests that caffeine consumption may improve heart health by aiding vascular growth. Another study reveals increasing plant cover in Antarctica, linked to climate change. Research highlights how people often form opinions without sufficient information, contributing to conflicts. Lastly, a study confirms that astrologers perform no better than chance in predicting character or future events.
00:00:00 - This is SpaceTime Series 27, Episode 126, for broadcast on 18 October 2024
00:00:30 - NASA's Parker Solar Probe completes 21st close encounter with the Sun
00:03:08 - The Parker Solar Probe is touching the Sun for the first time
00:08:32 - Galaxy with unusual light signature attributed to gas outshining stars
00:12:00 - NASA scientists have successfully brought a key science instrument back online on Mars
00:14:51 - A new study has shown that consuming more caffeine may improve your heart health
00:17:01 - New study shows people are biased to assume they know enough about situations
www.spacetimewithstuartgary.com
www.bitesz.com

Startup Project
#84 From Mars Rover to Startups: Ex-AWS Exec Khawaja Shams on the Art of Product-Market Fit in B2B Cloud

Oct 15, 2024 · 49:57


In this episode of Startup Project, we chat with Khawaja Shams, Co-founder and CEO of Momento, a serverless caching and messaging service built for interactive applications at scale.
Host: Nataraj (Investor at Incisive VC, angel investor, and Senior Product Manager)
Guest: Khawaja Shams (Co-founder and CEO of Momento)
Website: Momento Website
LinkedIn: Nataraj's LinkedIn | Khawaja's LinkedIn
[0:00 - 2:00] Khawaja shares his incredible journey: from working on image processing for Mars rovers and communications for interplanetary missions at NASA to building crucial infrastructure at Amazon Web Services (AWS) and ultimately starting Momento.
[2:00 - 6:00] Khawaja provides an in-depth look at his early days at NASA, where he was inspired by the agency's mission and the potential of cloud computing. He discusses how he prototyped using public datasets on his personal credit card and the challenges of onboarding Amazon as a vendor in the early days of AWS.
[6:00 - 10:00] We discuss Khawaja's experience at Amazon, where he witnessed the company's rapid growth and customer obsession firsthand. He details his roles in AWS product engineering and leading key teams, including DynamoDB and Elemental Technologies.
[10:00 - 16:00] Khawaja talks about the inspiration behind Momento and how the need for a better caching solution for interactive applications became clear. He explains how Momento addresses the pain points of traditional caching solutions and simplifies development for users.
[16:00 - 20:00] We dive deeper into Momento's target customer base and the importance of focusing on verticals like media, gaming, and fintech. Khawaja shares valuable insights on identifying the right customers and building strong design partnerships.
[20:00 - 25:00] Khawaja discusses product-market fit and how Momento validated its solution through numerous successful customers. He emphasizes the need for coherence in customer asks and how that provides confidence in the product's direction.
[25:00 - 30:00] We talk about B2B growth and marketing strategies, specifically how Momento leverages its existing customer base and focuses on finding similar companies. Khawaja stresses the importance of operational excellence and customer obsession in building trust and advocacy.
[30:00 - 35:00] Khawaja shares his thoughts on Amazon's leadership principles and how Momento has cultivated its own unique culture focused on customer centricity and psychological safety.
[35:00 - 40:00] We explore the challenges of attracting top talent in a startup environment. Khawaja emphasizes the importance of finding a team you enjoy working with and tackling a problem you believe in.
[40:00 - 45:00] Khawaja shares his current consumption habits, including his favorite books and podcasts. He also highlights the importance of mentorship and staying connected with people you admire.
[45:00 - 50:00] Khawaja discusses the importance of focus in a startup environment and how prioritizing a few key goals can lead to greater success.
[50:00 - 55:00] We finish with a discussion about AI and how Momento plays a crucial role in enabling interactive applications powered by real-time data.
#Startup #TechPodcast #Serverless #CloudComputing #AWS #InteractiveApps #B2BMarketing #Entrepreneurship #Leadership #AI #Fintech #MediaTech #GamingTech #ProductMarketFit #Caching #CustomerObsession #FoundersJourney

5 Good News Stories
Beetlejuice Fanta, Beetlejuice Fanta, Helene, Helene, Helene

Oct 11, 2024 · 4:47


Johnny Mac shares five uplifting stories: Students excavating a Gaulish village in France find a 200-year-old message in a bottle, the world's largest cheesecake sets a new record at the Cream Cheese Festival in New York, NASA's Perseverance Mars rover discovers a unique rock formation, Augusta National Golf Club and Dolly Parton commit substantial funds for Hurricane Helene relief, and Fanta introduces a controversial Beetlejuice-inspired Halloween flavor. The script concludes with a humorous mention of Beetlejuice, encouraging listeners to try a commercial-free subscription for more content.
00:00 Introduction and Archaeological Discovery
01:02 World's Largest Cheesecake
02:12 Mars Rover's Zebra Rock
02:43 Hurricane Helene Relief Efforts from Dolly Parton
03:37 Fanta's Questionable Halloween Flavor
Unlock an ad-free podcast experience with Caloroga Shark Media! Get all our shows on any player you love, hassle free! For Apple users, hit the banner on your Apple Podcasts app. For Spotify or other players, visit caloroga.com/plus. No plug-ins needed! You also get the other shows on the network ad-free! $4.99, a no brainer. This podcast supports Podcasting 2.0 if you'd like to support the show via value for value and stream some sats!

All TWiT.tv Shows (MP3)
This Week in Space 130: Dogs on Mars, Snakes on the Moon

Sep 27, 2024 · 69:49


NASA's planetary exploration program is in trouble. The Mars Sample Return program is verging on cancellation, and the VIPER mission to the moon already has been. Both are critical precursors to human exploration of these places, as Dr. Jim Bell of Arizona State University will tell us. We need to know more about the surface of Mars--with direct, in-the-lab studies of Mars rocks--and we certainly need to understand where the volatiles--another name for water and other resources of value--are on the moon. And, if you're the US government, you'd like to do both before China does--which is likely not far off. Does it matter who achieves these things first? And specific to the US, what role might private companies and individuals play in the drama?
Headlines:
SpaceX's Crew-9 mission is set to launch two astronauts to the International Space Station on Saturday, September 28th, with the primary objective of bringing back the Starliner astronauts who have been on the station for an extended period.
A Seattle-based company, Radian Aerospace, has unveiled plans for a new reusable orbital spaceplane that will utilize a revolutionary two-mile-long sled launch system.
Earth is set to temporarily capture a small asteroid, 2024 PT5, which will remain in close proximity to our planet from September 29th to November 25th, providing scientists with an opportunity to study a near-Earth asteroid up close as it performs a de-facto flyby.
Main Topic - Discussion with Dr. Jim Bell:
Jim Bell discusses his early fascination with space exploration, inspired by the Apollo missions and Carl Sagan's acclaimed 1980s series "Cosmos," which led him to pursue a career in planetary science.
The decadal survey process is explained, highlighting how it helps align the scientific community's priorities with NASA's mission planning and funding decisions.
The challenges faced by the Mars Sample Return mission are discussed, with Jim expressing optimism that NASA will find a way to overcome the current budgetary hurdles and complete this groundbreaking mission.
The cancellation of the VIPER lunar rover mission is addressed, with the hosts and guest emphasizing the importance of this mission for future human exploration of the Moon and the need for more transparency in NASA's decision-making process.
Jim shares his perspective on the increasing involvement of commercial space companies in planetary exploration, stressing the importance of developing sustainable business models to ensure the long-term viability of these ventures.
The conversation touches on the balance between NASA's priorities, such as the Artemis program, and the funding allocated to robotic scientific missions, with Jim highlighting the need for better communication and collaboration between the human spaceflight and robotic exploration divisions of NASA.
Hosts: Rod Pyle and Tariq Malik
Guest: Jim Bell
Download or subscribe to this show at https://twit.tv/shows/this-week-in-space. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit

This Week in Space (Audio)
TWiS 130: Dogs on Mars, Snakes on the Moon - Mars Sample Return and VIPER

Sep 27, 2024 · 69:49


NASA's planetary exploration program is in trouble. The Mars Sample Return program is verging on cancellation, and the VIPER mission to the moon already has been. Both are critical precursors to human exploration of these places, as Dr. Jim Bell of Arizona State University will tell us. We need to know more about the surface of Mars--with direct, in-the-lab studies of Mars rocks--and we certainly need to understand where the volatiles--another name for water and other resources of value--are on the moon. And, if you're the US government, you'd like to do both before China does--which is likely not far off. Does it matter who achieves these things first? And specific to the US, what role might private companies and individuals play in the drama?
Headlines:
SpaceX's Crew-9 mission is set to launch two astronauts to the International Space Station on Saturday, September 28th, with the primary objective of bringing back the Starliner astronauts who have been on the station for an extended period.
A Seattle-based company, Radian Aerospace, has unveiled plans for a new reusable orbital spaceplane that will utilize a revolutionary two-mile-long sled launch system.
Earth is set to temporarily capture a small asteroid, 2024 PT5, which will remain in close proximity to our planet from September 29th to November 25th, providing scientists with an opportunity to study a near-Earth asteroid up close as it performs a de-facto flyby.
Main Topic - Discussion with Dr. Jim Bell:
Jim Bell discusses his early fascination with space exploration, inspired by the Apollo missions and Carl Sagan's acclaimed 1980s series "Cosmos," which led him to pursue a career in planetary science.
The decadal survey process is explained, highlighting how it helps align the scientific community's priorities with NASA's mission planning and funding decisions.
The challenges faced by the Mars Sample Return mission are discussed, with Jim expressing optimism that NASA will find a way to overcome the current budgetary hurdles and complete this groundbreaking mission.
The cancellation of the VIPER lunar rover mission is addressed, with the hosts and guest emphasizing the importance of this mission for future human exploration of the Moon and the need for more transparency in NASA's decision-making process.
Jim shares his perspective on the increasing involvement of commercial space companies in planetary exploration, stressing the importance of developing sustainable business models to ensure the long-term viability of these ventures.
The conversation touches on the balance between NASA's priorities, such as the Artemis program, and the funding allocated to robotic scientific missions, with Jim highlighting the need for better communication and collaboration between the human spaceflight and robotic exploration divisions of NASA.
Hosts: Rod Pyle and Tariq Malik
Guest: Jim Bell
Download or subscribe to this show at https://twit.tv/shows/this-week-in-space. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit

All TWiT.tv Shows (Video LO)
This Week in Space 130: Dogs on Mars, Snakes on the Moon

Sep 27, 2024 · 69:49 · Transcription available


NASA's planetary exploration program is in trouble. The Mars Sample Return program is verging on cancellation, and the VIPER mission to the moon already has been. Both are critical precursors to human exploration of these places, as Dr. Jim Bell of Arizona State University will tell us. We need to know more about the surface of Mars--with direct, in-the-lab studies of Mars rocks--and we certainly need to understand where the volatiles--another name for water and other resources of value--are on the moon. And, if you're the US government, you'd like to do both before China does--which is likely not far off. Does it matter who achieves these things first? And specific to the US, what role might private companies and individuals play in the drama?
Headlines:
SpaceX's Crew-9 mission is set to launch two astronauts to the International Space Station on Saturday, September 28th, with the primary objective of bringing back the Starliner astronauts who have been on the station for an extended period.
A Seattle-based company, Radian Aerospace, has unveiled plans for a new reusable orbital spaceplane that will utilize a revolutionary two-mile-long sled launch system.
Earth is set to temporarily capture a small asteroid, 2024 PT5, which will remain in close proximity to our planet from September 29th to November 25th, providing scientists with an opportunity to study a near-Earth asteroid up close as it performs a de-facto flyby.
Main Topic - Discussion with Dr. Jim Bell:
Jim Bell discusses his early fascination with space exploration, inspired by the Apollo missions and Carl Sagan's acclaimed 1980s series "Cosmos," which led him to pursue a career in planetary science.
The decadal survey process is explained, highlighting how it helps align the scientific community's priorities with NASA's mission planning and funding decisions.
The challenges faced by the Mars Sample Return mission are discussed, with Jim expressing optimism that NASA will find a way to overcome the current budgetary hurdles and complete this groundbreaking mission.
The cancellation of the VIPER lunar rover mission is addressed, with the hosts and guest emphasizing the importance of this mission for future human exploration of the Moon and the need for more transparency in NASA's decision-making process.
Jim shares his perspective on the increasing involvement of commercial space companies in planetary exploration, stressing the importance of developing sustainable business models to ensure the long-term viability of these ventures.
The conversation touches on the balance between NASA's priorities, such as the Artemis program, and the funding allocated to robotic scientific missions, with Jim highlighting the need for better communication and collaboration between the human spaceflight and robotic exploration divisions of NASA.
Hosts: Rod Pyle and Tariq Malik
Guest: Jim Bell
Download or subscribe to this show at https://twit.tv/shows/this-week-in-space. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit

This Week in Space (Video)
TWiS 130: Dogs on Mars, Snakes on the Moon - Mars Sample Return and VIPER

Sep 27, 2024 · 69:49


NASA's planetary exploration program is in trouble. The Mars Sample Return program is verging on cancellation, and the VIPER mission to the moon already has been. Both are critical precursors to human exploration of these places, as Dr. Jim Bell of Arizona State University will tell us. We need to know more about the surface of Mars--with direct, in-the-lab studies of Mars rocks--and we certainly need to understand where the volatiles--another name for water and other resources of value--are on the moon. And, if you're the US government, you'd like to do both before China does--which is likely not far off. Does it matter who achieves these things first? And specific to the US, what role might private companies and individuals play in the drama?
Headlines:
SpaceX's Crew-9 mission is set to launch two astronauts to the International Space Station on Saturday, September 28th, with the primary objective of bringing back the Starliner astronauts who have been on the station for an extended period.
A Seattle-based company, Radian Aerospace, has unveiled plans for a new reusable orbital spaceplane that will utilize a revolutionary two-mile-long sled launch system.
Earth is set to temporarily capture a small asteroid, 2024 PT5, which will remain in close proximity to our planet from September 29th to November 25th, providing scientists with an opportunity to study a near-Earth asteroid up close as it performs a de-facto flyby.
Main Topic - Discussion with Dr. Jim Bell:
Jim Bell discusses his early fascination with space exploration, inspired by the Apollo missions and Carl Sagan's acclaimed 1980s series "Cosmos," which led him to pursue a career in planetary science.
The decadal survey process is explained, highlighting how it helps align the scientific community's priorities with NASA's mission planning and funding decisions.
The challenges faced by the Mars Sample Return mission are discussed, with Jim expressing optimism that NASA will find a way to overcome the current budgetary hurdles and complete this groundbreaking mission.
The cancellation of the VIPER lunar rover mission is addressed, with the hosts and guest emphasizing the importance of this mission for future human exploration of the Moon and the need for more transparency in NASA's decision-making process.
Jim shares his perspective on the increasing involvement of commercial space companies in planetary exploration, stressing the importance of developing sustainable business models to ensure the long-term viability of these ventures.
The conversation touches on the balance between NASA's priorities, such as the Artemis program, and the funding allocated to robotic scientific missions, with Jim highlighting the need for better communication and collaboration between the human spaceflight and robotic exploration divisions of NASA.
Hosts: Rod Pyle and Tariq Malik
Guest: Jim Bell
Download or subscribe to this show at https://twit.tv/shows/this-week-in-space. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit

Total Information AM Weekend
Perseverance and the Legacy of Mars Rovers: A Journey Beyond Technology

Aug 18, 2024 · 6:27


Join host Scott Jagow as we explore the incredible story of NASA's Mars rovers, from the groundbreaking missions of Spirit and Opportunity to the ongoing adventures of Perseverance. Learn about the technological feats, emotional connections, and the profound legacy left behind on the Red Planet, including a special message from Carl Sagan that could one day be discovered by future explorers.

Ever Wonder? from the California Science Center
REBROADCAST...who drives a Mars rover? (with Hallie Abarca)

Jul 31, 2024 · 23:29


When this episode airs, NASA will be just one day away from landing another rover on Mars. On February 18, the Perseverance rover will reach the surface of the Red Planet, capping off a journey that started with a rocket launch last July. In an earlier episode, we talked with Matthew Frost about Perseverance's robot arm, and how it works to collect samples from the Martian surface. But that robot arm becomes a lot more useful when you can drive it around Mars. And that takes a whole team of dedicated rover drivers back here on Earth. Do you ever wonder who drives a Mars rover? We were lucky to chat with Hallie Abarca, a former Mars rover driver and software engineer on the Perseverance rover at NASA JPL. She talks about what it was like to drive other Mars rovers, working on “Mars time,” and a new JPL website where you can virtually drive across the surface of Mars from your home. Have a question you've been wondering about? Send an email to everwonder@californiasciencecenter.org to tell us what you'd like to hear in future episodes. Follow us on Twitter (@casciencecenter), Instagram (@californiasciencecenter), and Facebook (@californiasciencecenter). Support the Show.

EXOPOLITICS TODAY with Dr. Michael Salla
What do the Mars Rover, Nazca Mummies, Cloned Children, Jimmy Kimmel and Kamala Harris have in common?

Jul 27, 2024 · 50:17


Exopolitics Today Week in Review with Dr Michael Salla - July 27, 2024. Topics:
NASA's Mars rover missions are currently searching for evidence of ancient microbial life, and scientists are excited that the Perseverance rover may have found signs of such life.
Forensic analysis shows that Maria, one of the Nazca mummies, is at least 1000 years old and is part of Earth's genetic family tree.
Liberation Times published an excellent article about key political figures in the US and where they stand on the UFO issue.
Secret remote viewing program used cloned children to enhance psychoenergetic abilities: interview with Tony Rodrigues.
This interview with Jimmy Kimmel shows Kamala Harris doesn't have the intellectual curiosity in UFOs to say anything more than she is interested.
Interview with David Adair about his inventions and upcoming presentation at the Galactic Spiritual Informers Connection.
China's plan to expand its International Lunar Research Station Initiative to 50 countries ensures the emergence of rival space blocs that will contest outer space for decades to come.
The Ukrainian Army's beachhead in the town of Krynky fails to secure access to a buried space ark.
Would Elon Musk be so eager to mass produce humanoid robots if he knew how AI life forms are regarded by extraterrestrial civilizations in our galaxy?
Deep State plans to use bioengineered clone armies in false flag attacks.
Roundtable on space arks, sleeping giants, ET assimilation, and mysteries of Saturn.
US Border Patrol agent confirms how the Biden Administration has been flooding the US with illegal immigrants.
Translation of US Army Insider Missions into Portuguese.
Interview with JP on a popular Brazilian podcast.
Today's webinar: Faking a Cryptoterrestrial Invasion.
Twitter feed: https://twitter.com/michaelsalla
--- Support this podcast: https://podcasters.spotify.com/pod/show/exopoliticstoday/support

The West Live Podcast
Mars rover makes groundbreaking find… By accident

Jul 22, 2024 · 1:37


See omnystudio.com/listener for privacy information.

The West Live Podcast
Biden quits race, Mars rover's historic find & F1 drama

Jul 22, 2024 · 22:25


Joe Biden pulls the pin on his presidential campaign but what happens now? Albo and Dutton thank Biden for AUKUS. An Australian drone defence company is capitalising on global strife. NASA's Mars rover makes a groundbreaking discovery by accident. And we unpack a huge weekend of sport. See omnystudio.com/listener for privacy information.

Who's That Girl? A New Girl Podcast
S4 E13 - Coming Out

Jul 15, 2024 · 69:34


This podcast covers New Girl Season 4, Episode 13, which originally aired on Jan 13, 2015 and was written by Sophia Lear and directed by Bill Purple. Here's a quick recap of the episode: In this episode, Coach lost his status as the guy people want to sleep with, so he encourages Jess and Ryan to come out about their relationship at school. Schmidt is dealing with an ulcer which Nick & Kai try to help him with. This episode got an 8/10 rating from both Kritika and Kelly; Kritika's favorite character was Winston and Kelly's favorite was Coach.
While not discussed in the podcast, we noted other references in this episode including:
Da Vinci - Schmidt agreed that Nick has always been physically lazy, but went on to say that mentally he was like “Da Vinci in tie-dye.”
Dennis Rodman - Nick felt that Schmidt lived by his own rules in college and said he was like “a fat, Jewish Dennis Rodman.”
Mars Rover - Ryan's field trip idea was to let the students talk to real astronauts and operate the Mars Rover.
Additionally, we mentioned in our podcast episode the “Old Girl” spoof with Helen Slayton-Hughes. You can watch it here.
Thanks for listening and stay tuned for Episode 14!
Music: "Hotshot" by scottholmesmusic.com
Follow us on Twitter, Instagram or email us at whosthatgirlpod@gmail.com!
Website: https://smallscreenchatter.com/

SpaceTime with Stuart Gary | Astronomy, Space & Science News
S27E81: Jupiter's Lava Lakes, Mars Rover's Ancient Riverbed, and Space Tourism Health Risks

Jul 5, 2024 · 41:39


Join us for SpaceTime Series 27 Episode 81, where we delve into the latest discoveries and advancements in space exploration.
First, new observations from NASA's Juno spacecraft reveal that Jupiter's volcanic moon Io is covered in lakes of molten lava. These findings, published in Communications Earth & Environment, provide a fuller picture of Io's extensive volcanic activity and offer new insights into the volcanic processes at work on this ancient, violent world. Io, slightly larger than Earth's moon, is the most volcanically active world in our solar system due to the gravitational forces from its neighboring Jovian moons and Jupiter itself. Juno's recent flybys have captured high-resolution infrared images showing bright rings surrounding numerous hotspots, indicating that much of Io's surface is covered in lava lakes with caldera-like features.
Next, NASA's Mars Perseverance rover has crossed an ancient Martian riverbed in the Jezero Crater, reaching the Bright Angel geological site earlier than expected. This route provided a treasure trove of geological features, including rocks with diverse textures and compositions. Perseverance's exploration of this ancient river channel offers new clues about Mars' geological history and the processes that shaped its surface.
Finally, we examine whether space tourism is healthy. New research published in the journal Frontiers in Physiology warns that wealthy, unhealthy individuals venturing into space may face increased health risks, such as pulmonary edema, due to the effects of microgravity on the heart. The study suggests that future space tourists might need to send a digital twin of themselves into virtual space to test their bodies' responses before embarking on the real journey.
July Skywatch: what to look for in the night skies throughout the month of July with Sky & Telescope's Jonathan Nally.
Follow our cosmic conversations on X @stuartgary, Instagram, YouTube, and Facebook. Join us as we unravel the mysteries of the universe, one episode at a time.
Sponsor offer: This episode is proudly supported by NordVPN. Secure your digital journey across the cosmos with a VPN service you can trust. Find your stellar security solution at https://www.bitesz.com/nordvpn.
Listen to SpaceTime on your favorite podcast app including Apple Podcasts, Spotify, YouTube Music, or wherever you get your podcasts.
Become a supporter of SpaceTime: https://www.bitesz.com/show/spacetime/support/
www.bitesz.com

All TWiT.tv Shows (MP3)
This Week in Space 112: Mars on Pause?

May 24, 2024 · 67:34


This week we've invited JPL's Chief Engineer Emeritus, Rob Manning, back to discuss Mars exploration and, in particular, Mars Sample Return. As we discussed in episode 107, that project is in a bit of trouble. Rob was the Chief Engineer of every Mars rover up through Perseverance and the overall Chief Engineer on Perseverance, and he has some unique insights on how we have explored Mars, why it matters, and what the future holds... especially with regard to returning samples to Earth. Join us!
Headlines:
NASA held a press conference to explain the latest delays with Boeing's Starliner spacecraft, which stem from issues with a helium leak and concerns about the reaction control thrusters that could lead to a loss of redundancy during reentry.
The European Space Agency's Euclid Space Telescope returned its first science images, providing stunning new views of galaxies never seen in such detail before to help unlock the mysteries of dark matter and dark energy.
Main Topic - Mars Exploration and Sample Return:
Rob Manning recounts his extensive experience with Mars exploration at JPL, from the Sojourner rover and Pathfinder lander in the 90s to the currently operating Curiosity and Perseverance rovers.
Curiosity confirmed the past presence of water on Mars, while Perseverance is collecting carefully selected rock samples to eventually be returned to Earth.
The Mars Sample Return mission would bring pristine samples back to Earth for in-depth study, but is an extremely complex and costly endeavor facing budget challenges and potential delays.
Rob explains why returning samples is so critical: context is key, and current meteorite samples have been altered by their journey to Earth, whereas carefully selected samples could reveal much more about Mars' history and potential for life.
Challenges for Mars Sample Return include the large size of the lander, the need for new parachute and guidance technologies, and planetary protection requirements to prevent contaminating Earth.
NASA currently has no plans for additional Mars missions beyond sample return, and faces a potential loss of institutional knowledge as a "quiet period" approaches, highlighting the need to maintain momentum in Mars exploration.
Hosts: Rod Pyle and Tariq Malik
Guest: Rob Manning
Download or subscribe to this show at https://twit.tv/shows/this-week-in-space. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit

This Week in Space (Audio)
TWiS 112: Mars on Pause? - With JPL Chief Engineer Emeritus Rob Manning

May 24, 2024 · 67:34 · Transcription available


This week we've invited JPL's Chief Engineer Emeritus, Rob Manning, back to discuss Mars exploration and, in particular, Mars Sample Return. As we discussed in episode 107, that project is in a bit of trouble. Rob was the Chief Engineer of every Mars rover up through Perseverance and the overall Chief Engineer on Perseverance, and he has some unique insights on how we have explored Mars, why it matters, and what the future holds... especially with regard to returning samples to Earth. Join us!
Headlines:
NASA held a press conference to explain the latest delays with Boeing's Starliner spacecraft, which stem from issues with a helium leak and concerns about the reaction control thrusters that could lead to a loss of redundancy during reentry.
The European Space Agency's Euclid Space Telescope returned its first science images, providing stunning new views of galaxies never seen in such detail before to help unlock the mysteries of dark matter and dark energy.
Main Topic - Mars Exploration and Sample Return:
Rob Manning recounts his extensive experience with Mars exploration at JPL, from the Sojourner rover and Pathfinder lander in the 90s to the currently operating Curiosity and Perseverance rovers.
Curiosity confirmed the past presence of water on Mars, while Perseverance is collecting carefully selected rock samples to eventually be returned to Earth.
The Mars Sample Return mission would bring pristine samples back to Earth for in-depth study, but is an extremely complex and costly endeavor facing budget challenges and potential delays.
Rob explains why returning samples is so critical: context is key, and current meteorite samples have been altered by their journey to Earth, whereas carefully selected samples could reveal much more about Mars' history and potential for life.
Challenges for Mars Sample Return include the large size of the lander, the need for new parachute and guidance technologies, and planetary protection requirements to prevent contaminating Earth.
NASA currently has no plans for additional Mars missions beyond sample return, and faces a potential loss of institutional knowledge as a "quiet period" approaches, highlighting the need to maintain momentum in Mars exploration.
Hosts: Rod Pyle and Tariq Malik
Guest: Rob Manning
Download or subscribe to this show at https://twit.tv/shows/this-week-in-space. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit

All TWiT.tv Shows (Video LO)
This Week in Space 112: Mars on Pause?

May 24, 2024 · 67:34


This week we've invited JPL's Chief Engineer Emeritus, Rob Manning, back to discuss Mars exploration and, in particular, Mars Sample Return. As we discussed in episode 107, that project is in a bit of trouble. Rob was the Chief Engineer of every Mars rover up through Perseverance and the overall Chief Engineer on Perseverance, and he has some unique insights on how we have explored Mars, why it matters, and what the future holds... especially with regard to returning samples to Earth. Join us!
Headlines:
NASA held a press conference to explain the latest delays with Boeing's Starliner spacecraft, which stem from issues with a helium leak and concerns about the reaction control thrusters that could lead to a loss of redundancy during reentry.
The European Space Agency's Euclid Space Telescope returned its first science images, providing stunning new views of galaxies never seen in such detail before to help unlock the mysteries of dark matter and dark energy.
Main Topic - Mars Exploration and Sample Return:
Rob Manning recounts his extensive experience with Mars exploration at JPL, from the Sojourner rover and Pathfinder lander in the 90s to the currently operating Curiosity and Perseverance rovers.
Curiosity confirmed the past presence of water on Mars, while Perseverance is collecting carefully selected rock samples to eventually be returned to Earth.
The Mars Sample Return mission would bring pristine samples back to Earth for in-depth study, but is an extremely complex and costly endeavor facing budget challenges and potential delays.
Rob explains why returning samples is so critical: context is key, and current meteorite samples have been altered by their journey to Earth, whereas carefully selected samples could reveal much more about Mars' history and potential for life.
Challenges for Mars Sample Return include the large size of the lander, the need for new parachute and guidance technologies, and planetary protection requirements to prevent contaminating Earth.
NASA currently has no plans for additional Mars missions beyond sample return, and faces a potential loss of institutional knowledge as a "quiet period" approaches, highlighting the need to maintain momentum in Mars exploration.
Hosts: Rod Pyle and Tariq Malik
Guest: Rob Manning
Download or subscribe to this show at https://twit.tv/shows/this-week-in-space. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit

This Week in Space (Video)
TWiS 112: Mars on Pause? - With JPL Chief Engineer Emeritus Rob Manning

May 24, 2024 · 67:34 · Transcription available


This week we've invited JPL's Chief Engineer Emeritus, Rob Manning, back to discuss Mars exploration and, in particular, Mars Sample Return. As we discussed in episode 107, that project is in a bit of trouble. Rob was the Chief Engineer of every Mars rover up through Perseverance and the overall Chief Engineer on Perseverance, and he has some unique insights on how we have explored Mars, why it matters, and what the future holds... especially with regard to returning samples to Earth. Join us!
Headlines:
NASA held a press conference to explain the latest delays with Boeing's Starliner spacecraft, which stem from issues with a helium leak and concerns about the reaction control thrusters that could lead to a loss of redundancy during reentry.
The European Space Agency's Euclid Space Telescope returned its first science images, providing stunning new views of galaxies never seen in such detail before to help unlock the mysteries of dark matter and dark energy.
Main Topic - Mars Exploration and Sample Return:
Rob Manning recounts his extensive experience with Mars exploration at JPL, from the Sojourner rover and Pathfinder lander in the 90s to the currently operating Curiosity and Perseverance rovers.
Curiosity confirmed the past presence of water on Mars, while Perseverance is collecting carefully selected rock samples to eventually be returned to Earth.
The Mars Sample Return mission would bring pristine samples back to Earth for in-depth study, but is an extremely complex and costly endeavor facing budget challenges and potential delays.
Rob explains why returning samples is so critical: context is key, and current meteorite samples have been altered by their journey to Earth, whereas carefully selected samples could reveal much more about Mars' history and potential for life.
Challenges for Mars Sample Return include the large size of the lander, the need for new parachute and guidance technologies, and planetary protection requirements to prevent contaminating Earth.
NASA currently has no plans for additional Mars missions beyond sample return, and faces a potential loss of institutional knowledge as a "quiet period" approaches, highlighting the need to maintain momentum in Mars exploration.
Hosts: Rod Pyle and Tariq Malik
Guest: Rob Manning
Download or subscribe to this show at https://twit.tv/shows/this-week-in-space. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit

Elon Musk Pod
ESA's Mars Rover Mission Rescued: Nuclear Tech and NASA Partnership Propel Launch to 2028

May 23, 2024 · 8:11


The European Space Agency's Mars rover mission, Rosalind Franklin, is back on track for a 2028 launch, thanks to a groundbreaking nuclear-powered heating system and a crucial partnership with NASA. After severing ties with Russia, ESA is forging ahead with innovative technology and renewed international collaboration.

The Habit Coach with Ashdin Doctor

Join Ashdin Doctor on the Habit Coach Podcast, where he shares bite-sized, actionable habits for an awesome life. In today's episode, discover how India achieved its remarkable success with the Mars Orbiter Mission (MoM), the challenges of reaching Mars, and the importance of setting clear financial goals inspired by space missions. Learn Ashdin's three key steps to setting effective financial goals and start your journey towards financial success today. See omnystudio.com/listener for privacy information.

My Favorite Mistake
Almost Killed a $500 Million Mars Rover: Chris Lewicki's Lessons Learned

May 6, 2024 · 50:24


My guest for Episode #257 of the My Favorite Mistake podcast is Chris Lewicki, an astrofuturist, engineer, and entrepreneur who is interested in developing strong, thoughtful foundations for the near-future space economy. Episode page with transcript and more.
He's a multi-time co-founder. He first co-founded and was CEO of Planetary Resources Inc. (PRI), which focused on the prospecting, development, and use of resources found on near-Earth asteroids. He helped acquire over $60M in investment and revenue, built a team of 80 extremely talented engineers, scientists, and business and policy leaders, and launched 3 experimental spacecraft to advance the adoption of space resources as a crucial part of humanity's activities in space.
Prior to entering the private sector, Chris was a key member of NASA's Mars Exploration Rovers and the Phoenix Mars Lander, serving as Flight Director for the Mars rovers Spirit and Opportunity, and as the Surface Mission Manager for Phoenix. Chris received both bachelor's and master's degrees in aerospace engineering from the University of Arizona. He's the recipient of two NASA Exceptional Achievement Medals and has an asteroid named in his honor: 13609 Lewicki.
Chris imparts lessons learned from his early days in NASA's Mars exploration projects, where a potential disaster during a rover test thrust him into the limelight as an emerging leader in the field. His poignant recount of the incident underscores the nuanced details that contribute to the success or failure of any mission and the critical concept of design for test (DFT). Drawing parallels to the broader engineering community, this episode's riveting discussion reveals essential strategies used in this high-stakes industry. The implementation of mistake-proofing tactics ("poka-yoke"), robust system performance to ensure resilience, and the introduction of redundancy in spacecraft design all contribute to an airtight spacecraft system. Learn from Chris's profound insights as he unravels the multifaceted considerations that go into ensuring functionality, designing for testability, and anticipating service requirements and testing needs during the initial design phases.
Questions and topics:
Was it a connector being reversed?
New and innovative work… was it a design mistake to not be "designed for test"? Could that have been mistake-proofed in some way? It was not.
Would they have fired you? Did you ask? Ernie or others?
Took time to be able to tell the story? How long?
What response did you get to sharing that story online?
Bringing these lessons into the private sector as CEO?
How many people have taken you up on your offer to share their failure stories?
MY $500M MARS ROVER MISTAKE: A FAILURE STORY
Netflix documentary on the James Webb telescope

Minds Behind Maps
How to Map Mars (to Land Rovers) - Fred Calef III - MBM#66

Minds Behind Maps

Play Episode Listen Later May 1, 2024 75:02


Dr Fred Calef III has the unofficial title of "Keeper of Maps" at NASA JPL; he's the Lead Mapping Specialist for most of JPL's Mars rover missions, most recently Perseverance and Curiosity. But to land, and navigate, a rover, one needs maps, and Fred makes them.
Sponsor: Nimbo by Kermap
Try out Kermap's monthly mosaic viewer Nimbo for yourself
Support the podcast on Patreon
About Fred
Twitter
Mastodon
Shownotes
Note: Links to books are Amazon Affiliate links. I earn a small commission if you buy any of these books.
VICAR Github repo (Video Image Communication & Retrieval)
Mars 2020 Rover: Terrain Relative Navigation
Airy-0 crater
7 Minutes to Mars
MMGIS (Multi Planet Geospatial Information System)
Github Repo
Mars Rover Location Map
Book recommendations
Three Body Problem by Cixin Liu (Affiliate Link)
The Martian by Andy Weir (Affiliate Link)
Timestamps
(00:00) - Introduction
(00:48) - Sponsor: Nimbo by Kermap
(02:23) - How would you describe yourself?
(03:18) - Keeper of the Maps
(05:04) - What it takes to map Mars
(10:21) - Deciding where to put (0,0)
(12:33) - Current accuracy of Mars mapping
(14:01) - 150m / pixel: How do you find anything?
(18:14) - Rover cameras on the ground
(22:39) - Creating detailed maps for the Rover's automation
(26:07) - How would we be navigating on Mars if we send people there?
(31:20) - Comparing to the early days of car navigation
(34:15) - Using a compass on Mars
(36:13) - Mapping tools
(48:54) - Has every image of Mars been seen by at least 1 person?
(53:37) - Mars doesn't change that much
(56:45) - More strange differences between Mars & Earth
(01:00:53) - Mapping other celestial bodies
(01:05:04) - Missions or mapping projects that Fred is looking forward to
(01:06:10) - Book/podcast recommendation
(01:10:06) - One last question: Mars time
(01:13:19) - Support the podcast on Patreon
Support the podcast on Patreon
My Twitter
Podcast Twitter
Read Previous Issues of the Newsletter
Edited by Peter Xiong
Find more of his work
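To make the episode's "150 m/pixel" and "where to put (0,0)" discussion concrete, here is a rough back-of-the-envelope Python sketch of how a latitude/longitude maps to a pixel on a simple equirectangular global mosaic. The projection and constants are simplifying assumptions for illustration, not Fred's actual JPL pipeline:

```python
import math

MARS_RADIUS_M = 3_389_500   # IAU mean radius of Mars, in metres
M_PER_PIXEL = 150.0         # global basemap resolution cited in the episode

circumference_m = 2 * math.pi * MARS_RADIUS_M    # ~21,297 km around the equator
width_px = circumference_m / M_PER_PIXEL         # ~142,000 pixels wide
height_px = width_px / 2                         # 180 degrees of latitude

def latlon_to_pixel(lat_deg: float, lon_deg: float):
    """Longitude 0 defined at Airy-0, east-positive, equirectangular."""
    x = (lon_deg % 360.0) / 360.0 * width_px
    y = (90.0 - lat_deg) / 180.0 * height_px
    return int(x), int(y)

# Jezero crater (Perseverance's landing site), roughly 18.44 N, 77.45 E:
print(latlon_to_pixel(18.44, 77.45))
```

Even at this coarse 150 m/pixel scale the global map is on the order of 142,000 x 71,000 pixels, which is why "how do you find anything?" is a fair question.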

Science Will Win
Live from SXSW - Scientific Storytelling: The Audio Advantage

Science Will Win

Play Episode Listen Later Apr 18, 2024 52:08


Podcasting has exploded in the last 5 years, with every celebrity, brand, and influencer investing in audio. It's estimated that 144MM people in the U.S. listen to a podcast monthly. With scale and an ability to build audience connections, podcasts have become effective at communicating complex information in a digestible way, making them perfect for sharing science-based stories. Many have heard the term "gene therapy," but how many know what it means? How do you explain the science behind artificial intelligence or designing the Mars Rover? Audio allows us to do this, while enhancing the impact of science communications. Join the Pfizer Podcast team, NASA, UT Austin, and Wonder Media Network as they share their audio expertise and accessible approach to scientific storytelling.
Featured experts:
Shira Atkins, CRO & Co-Founder, Wonder Media Network
Ellen Gerstein, Head of Digital Communications, Pfizer Inc.
Katie Konans, Audio Storytelling Lead, NASA
Kristen Wynn, Program Manager, The University of Texas at Austin, Dell Medical School, Livestrong Cancer Institutes
This episode was recorded live in Austin, TX on Monday, March 11 as part of Pfizer's takeover of the South by Southwest podcasting lounge.

The John Batchelor Show
#MARS: Building the ESA Mars rover. Bob Zimmerman BehindtheBlack.com

The John Batchelor Show

Play Episode Listen Later Apr 11, 2024 3:25


#MARS: Building the ESA Mars rover. Bob Zimmerman BehindtheBlack.com

Performance Anxiety
Jason Achilles

Performance Anxiety

Play Episode Listen Later Mar 22, 2024 100:34


Today's guest makes me wish I had studied more in school. Meet Jason Achilles. This guy's breadth of talent and interests just blows my mind! He's not just a talented musician; he's also an engineer who recorded the sounds of Mars from a microphone system he was in charge of on the Mars rover. And apparently this kind of professional diversity runs in the family. By the way, he uses one of the microphones that didn't make it to Mars on the podcast! But the podcast starts off with an admission by Jason about the amount of info he decided to share with me. We talk about his early musical endeavors, including how he started playing solo and improvisationally. So you now know that Jason has worked with NASA's Jet Propulsion Lab, but he's also worked with Jerry Cantrell, Doug Kershaw, Geezer Butler, Dizzy Reed, and a ton of other people. I'm really not sure which of those is more impressive to me. Jason reveals how he started working with NASA, getting involved in the Mars Rover project, and recording exclusively in analog. He obviously gets more creative the more limitations he has! He also reveals some news about a Carnegie Hall performance and putting together a planetarium tour, and, thanks to a student at the University of Alabama in Huntsville, he gets asked probably the most intelligent question that's ever been asked on this podcast (thank you, Triston Tindell). Check Jason's music out on Spotify, YouTube, or wherever you get your music. Go to jasonachilles.com for his music as well as Sounds From Mars and more info on upcoming events. He's @jasonachilles on X. We're @PerformanceAnx on the socials. You can keep us going with coffee at ko-fi.com/performanceanxiety or merch at performanceanx.threadless.com. Visiting our sponsors also helps out a lot! So let's check out Jason Achilles on Performance Anxiety on the Pantheon Podcast Network.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Speaker CFPs and Sponsor Guides are now available for AIE World's Fair: join us on June 25-27 for the biggest AI Engineer conference of 2024!
Soumith Chintala needs no introduction in the ML world; his insights are incredibly accessible across Twitter, LinkedIn, podcasts, and conference talks (in this pod we'll assume you'll have caught up on the History of PyTorch pod from last year and cover different topics). He's well known as the creator of PyTorch, but he's more broadly the Engineering Lead on AI Infra, PyTorch, and Generative AI at Meta. Soumith was one of the earliest supporters of Latent Space (and more recently AI News), and we were overjoyed to catch up with him on his latest SF visit for a braindump of the latest AI topics, reactions to some of our past guests, and why Open Source AI is personally so important to him.
Life in the GPU-Rich Lane
Back in January, Zuck went on Instagram to announce their GPU wealth: by the end of 2024, Meta will have 350k H100s. By adding all their GPU clusters, you'd get to 600k H100-equivalents of compute. At FP16 precision, that's ~1,200,000 PFLOPS. If we used George Hotz's (previous guest!) "Person of Compute" measure, Meta now has 60k humans of compute in their clusters. Occasionally we get glimpses into the GPU-rich life; on a recent ThursdAI chat, swyx prompted PaLM tech lead Yi Tay to write down what he missed most from Google, and he commented that UL2 20B was trained by accidentally leaving the training job running for a month, because hardware failures are so rare in Google.
Meta AI's Epic LLM Run
Before Llama broke the internet, Meta released an open source LLM in May 2022, OPT-175B, which was notable for how "open" it was, right down to the logbook! It was trained on 992 80GB NVIDIA A100 GPUs, and Soumith agrees that, with hindsight, it was likely under-trained for its parameter size.
In Feb 2023 (pre Latent Space pod), Llama was released, with a 7B version trained on 1T tokens alongside 65B and 33B versions trained on 1.4T tokens. The Llama authors included Guillaume Lample and Timothée Lacroix, who went on to start Mistral.
July 2023 was Llama 2 time (which we covered!): 3 model sizes, 7B, 13B, and 70B, all trained on 2T tokens. The family accounted for a grand total of 3,311,616 GPU hours of pre-training work, a figure that also covers the unreleased 34B variant. CodeLlama followed shortly after, a fine-tune of Llama 2 specifically focused on code generation use cases. The family had models in the 7B, 13B, 34B, and 70B sizes, all trained with 500B extra tokens of code and code-related data, except for the 70B, which was trained on 1T.
All of this on top of other open sourced models like Segment Anything (one of our early hits!), Detectron, Detectron 2, DensePose, and Seamless. In one year, Meta transformed from a company people made fun of for its "metaverse" investments to one of the key players in the AI landscape, and its stock has almost tripled since (about $830B in market value created in the past year).
Why Open Source AI
The obvious question is why Meta would spend hundreds of millions on its AI efforts and then release them for free. Zuck has addressed this in public statements. But for Soumith, the motivation is even more personal:
"I'm irrationally interested in open source. I think open source has that fundamental way to distribute opportunity in a way that is very powerful. Like, I grew up in India… And knowledge was very centralized, but I saw that evolution of knowledge slowly getting decentralized. And that ended up helping me learn quicker and faster for like zero dollars. 
And I think that was a strong reason why I ended up where I am. So like that, like the open source side of things, I always push regardless of like what I get paid for, like I think I would do that as a passion project on the side……I think at a fundamental level, the most beneficial value of open source is that you make the distribution to be very wide. It's just available with no friction and people can do transformative things in a way that's very accessible. Maybe it's open source, but it has a commercial license and I'm a student in India. I don't care about the license. I just don't even understand the license. But like the fact that I can use it and do something with it is very transformative to me……Like, okay, I again always go back to like I'm a student in India with no money. What is my accessibility to any of these closed source models? At some scale I have to pay money. That makes it a non-starter and stuff. And there's also the control issue: I strongly believe if you want human aligned AI, you want all humans to give feedback. And you want all humans to have access to that technology in the first place. And I actually have seen, living in New York, whenever I come to Silicon Valley, I see a different cultural bubble.We like the way Soumith put it last year: Closed AI “rate-limits against people's imaginations and needs”!What It Takes For Open Source AI to WinHowever Soumith doesn't think Open Source will simply win by popular demand. There is a tremendous coordination problem with the decentralized nature of the open source AI development right now: nobody is collecting the valuable human feedback in the way that OpenAI or Midjourney are doing.“Open source in general always has a coordination problem. If there's a vertically integrated provider with more resources, they will just be better coordinated than open source. And so now open source has to figure out how to have coordinated benefits. And the reason you want coordinated benefits is because these models are getting better based on human feedback. And if you see with open source models, like if you go to the /r/localllama subreddit, like there's so many variations of models that are being produced from, say, Nous research. I mean, like there's like so many variations built by so many people. And one common theme is they're all using these fine-tuning or human preferences datasets that are very limited and they're not sufficiently diverse. And you look at the other side, say front-ends like Oobabooga or like Hugging Chat or Ollama, they don't really have feedback buttons. All the people using all these front-ends, they probably want to give feedback, but there's no way for them to give feedback… So we're just losing all of this feedback. Maybe open source models are being as used as GPT is at this point in like all kinds of, in a very fragmented way, like in aggregate all the open source models together are probably being used as much as GPT is, maybe close to that. But the amount of feedback that is driving back into the open source ecosystem is like negligible, maybe less than 1% of like the usage. 
So I think like some, like the blueprint here I think is you'd want someone to create a sinkhole for the feedback… I think if we do that, if that actually happens, I think that probably has a real chance of the open source models having a runaway effect against OpenAI, I think like there's a clear chance we can take at truly winning open source."
If you're working on solving open source coordination, please get in touch!
Show Notes
* Soumith Chintala Twitter
* History of PyTorch episode on Gradient Podcast
* The Llama Ecosystem
* Apple's MLX
* Neural ODEs (Ordinary Differential Equations)
* AlphaGo
* LMSys arena
* Dan Pink's "Drive"
* Robotics projects:
  * Dobb-E
  * OK Robot
* Yann LeCun
* Yangqing Jia of Lepton AI
* Ed Catmull
* George Hotz on Latent Space
* Chris Lattner on Latent Space
* Guillaume Lample
* Yannic Kilcher of OpenAssistant
* LMSys
* Alex Atallah of OpenRouter
* Carlo Sferrazza's 3D tactile research
* Alex Wiltschko of Osmo
* Tangent by Alex Wiltschko
* Lerrel Pinto - Robotics
Timestamps
* [00:00:00] Introductions
* [00:00:51] Extrinsic vs Intrinsic Success
* [00:02:40] Importance of Open Source and Its Impact
* [00:03:46] PyTorch vs TinyGrad
* [00:08:33] Why PyTorch is the Switzerland of frameworks
* [00:10:27] Modular's Mojo + PyTorch?
* [00:13:32] PyTorch vs Apple's MLX
* [00:16:27] FAIR / PyTorch Alumni
* [00:18:50] How can AI inference providers differentiate?
* [00:21:41] How to build good benchmarks and learnings from AnyScale's
* [00:25:28] Most interesting unexplored ideas
* [00:28:18] What people get wrong about synthetic data
* [00:35:57] Meta AI's evolution
* [00:38:42] How do you allocate 600,000 GPUs?
* [00:42:05] Even the GPU Rich are GPU Poor
* [00:47:31] Meta's MTIA silicon
* [00:50:09] Why we need open source
* [00:59:00] Open source's coordination problem for feedback gathering
* [01:08:59] Beyond text generation
* [01:15:37] Osmo and the Future of Smell Recognition Technology
Transcript
Alessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO in residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.
Swyx [00:00:15]: Hey, and today we have in the studio Soumith Chintala, welcome.
Soumith [00:00:17]: Thanks for having me.
Swyx [00:00:18]: On one of your rare visits from New York where you live. You got your start in computer vision at NYU with Yann LeCun. That was a very fortuitous start. I was actually listening to your interview on the Gradient podcast. So if people want to know more about the history of Soumith, history of PyTorch, they can go to that podcast. We won't spend that much time there, but I just was marveling at your luck, or I don't know if it's your luck or your drive to find AI early and then find the right quality mentor because I guess Yann really sort of introduced you to that world.
Soumith [00:00:51]: Yeah, I think you're talking about extrinsic success, right? A lot of people just have drive to do things that they think is fun, and a lot of those things might or might not be extrinsically perceived as good and successful. I think I just happened to like something that is now one of the coolest things in the world or whatever. But if I happen, the first thing I tried to become was a 3D VFX artist, and I was really interested in doing that, but I turned out to be very bad at it. So I ended up not doing that further. But even if I was good at that, whatever, and I ended up going down that path, I probably would have been equally happy.
It's just like maybe like the perception of, oh, is this person successful or not might be different. I think like after a baseline, like your happiness is probably more correlated with your intrinsic stuff.
Swyx [00:01:44]: Yes. I think Dan Pink has this book on Drive that I often refer to about the power of intrinsic motivation versus extrinsic and how long extrinsic lasts. It's not very long at all. But anyway, now you are an investor in Runway, so in a way you're working on VFX. Yes.
Soumith [00:02:01]: I mean, in a very convoluted way.
Swyx [00:02:03]: It reminds me of Ed Catmull. I don't know if you guys know, but he actually tried to become an animator in his early years and failed or didn't get accepted by Disney and then went and created Pixar and then got bought by Disney and created Toy Story. So you joined Facebook in 2014 and eventually became a creator and maintainer of PyTorch. And there's this long story there you can refer to on the Gradient. I think maybe people don't know that you were also involved in more sort of hardware and cluster decision affairs. And we can dive into more details there because we're all about hardware this month. Yeah. And then finally, I don't know what else, like what else should people know about you on a personal side or professional side?
Soumith [00:02:40]: I think open source is definitely a big passion of mine and probably forms a little bit of my identity at this point. I'm irrationally interested in open source. I think open source has that fundamental way to distribute opportunity in a way that is very powerful. Like, I grew up in India. I didn't have internet for a while. In college, actually, I didn't have internet except for GPRS or whatever. And knowledge was very centralized, but I saw that evolution of knowledge slowly getting decentralized. And that ended up helping me learn quicker and faster for zero dollars. And I think that was a strong reason why I ended up where I am. So the open source side of things, I always push regardless of what I get paid for, like I think I would do that as a passion project on the side.
Swyx [00:03:35]: Yeah, that's wonderful. Well, we'll talk about the challenges as well that open source has, open models versus closed models. Maybe you want to touch a little bit on PyTorch before we move on to the sort of Meta AI in general.
PyTorch vs Tinygrad tradeoffs
Alessio [00:03:46]: Yeah, we kind of touched on PyTorch in a lot of episodes. So we had George Hotz from TinyGrad. He called PyTorch a CISC and TinyGrad a RISC. I would love to get your thoughts on PyTorch design direction as far as, I know you talk a lot about kind of having a happy path to start with and then making complexity hidden away but then available to the end user. One of the things that George mentioned is I think you have like 250 primitive operators in PyTorch, I think TinyGrad is four. So how do you think about some of the learnings that maybe he's going to run into that you already had in the past seven, eight years almost of running PyTorch?
Soumith [00:04:24]: Yeah, I think there's different models here, but I think it's two different models that people generally start with. Either they go like, I have a grand vision and I'm going to build a giant system that achieves this grand vision and maybe one is super feature complete or whatever. Or other people say they will get incrementally ambitious, right?
And they say, oh, we'll start with something simple and then we'll slowly layer out complexity in a way that optimally applies Huffman coding or whatever. Like where the density of users are and what they're using, I would want to keep it in the easy, happy path, and where the more niche advanced use cases are, I'll still want people to try them, but they need to take additional frictional steps. George, I think just like we started with PyTorch, George started with the incrementally ambitious thing. I remember TinyGrad used to be, like we would be limited to a thousand lines of code and I think now it's at 5,000. So I think there is no real magic to why PyTorch has the kind of complexity it has. I think it's probably partly necessitated and partly because we built with the technology available under us at that time. PyTorch is like 190,000 lines of code or something at this point. I think if you had to rewrite it, we would probably think about ways to rewrite it in a vastly simplified way for sure. But a lot of that complexity comes from the fact that, in a very simple, explainable way, you have memory hierarchies. You have CPU, which has three levels of caches, and then you have DRAM and SSD and then you have network. Similarly, GPU has several levels of memory and then you have different levels of network hierarchies, NVLink plus InfiniBand or RoCE or something like that, right? And the way the flops are available on your hardware, they are available in a certain way and your computation is in a certain way and you have to retrofit your computation onto both the memory hierarchy and the flops available. When you're doing this, it is actually a fairly hard mathematical problem to do this setup, like you find the optimal thing. And finding the optimal thing is, what is optimal depends on the input variables themselves. So like, okay, what is the shape of your input tensors and what is the operation you're trying to do and various things like that. Finding that optimal configuration and writing it down in code is not the same for every input configuration you have. Like for example, just as the shape of the tensors change, let's say you have three input tensors into a Sparstar product or something like that. The shape of each of these input tensors will vastly change how you do this optimally placing this operation onto the hardware in a way that will get you maximal throughput. So a lot of our complexity comes from writing out hundreds of configurations for each single PyTorch operator and templatizing these things and symbolically generating the final CUDA code or CPU code. There's no way to avoid it because mathematically we haven't found symbolic ways to do this that also keep compile time near zero. You can write a very simple framework, but then you also should be willing to eat the long compile time of searching for that optimal performance at runtime; that's the trade-off. I don't think George's vision is achievable unless we have great breakthroughs; he should be thinking about a narrower problem, such as I'm only going to make this work for self-driving car convnets, or I'm only going to make this work for LLM transformers of the Llama style. Like if you start narrowing the problem down, you can make a vastly simpler framework.
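(A concrete aside on the trade-off Soumith is describing: PyTorch ships pre-written configurations so eager mode pays no compile time, while torch.compile's "max-autotune" mode spends compile time searching for configurations specialized to your actual shapes. The toy function below is our own illustration, not from the episode; torch.compile and the "max-autotune" mode are real PyTorch 2.x APIs.)

```python
import torch

def block(x: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    # A toy compute block standing in for "some PyTorch operator chain".
    return torch.relu(x @ w).sum(dim=-1)

x, w = torch.randn(512, 1024), torch.randn(1024, 2048)

eager_out = block(x, w)  # runs immediately on pre-tuned kernel configs

# Pays a one-time compile cost to search shape-specialized configs:
compiled = torch.compile(block, mode="max-autotune")
tuned_out = compiled(x, w)

print(torch.allclose(eager_out, tuned_out, atol=1e-4))
```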
But if you don't, if you need the generality to power all of the AI research that is happening and keep zero compile time and all these other factors, I think it's not easy to avoid the complexity.
Pytorch vs Mojo
Alessio [00:08:33]: That's interesting. And we kind of touched on this with Chris Lattner when he was on the podcast. If you think about frameworks, they have the model target. They have the hardware target. They have different things to think about. He mentioned when he was at Google, TensorFlow trying to be optimized to make TPUs go brr, you know, and go as fast. I think George is trying to make especially the AMD stack be better than ROCm. How come PyTorch has been such a Switzerland versus just making Meta hardware go brr?
Soumith [00:09:00]: First, Meta is not in the business of selling hardware. Meta is not in the business of cloud compute. The way Meta thinks about funding PyTorch is we're funding it because it's net good for Meta to fund PyTorch, because PyTorch has become a standard and a big open source project. And generally it gives us a timeline edge. It gives us leverage and all that within our own work. So why is PyTorch more of a Switzerland rather than being opinionated? I think the way we think about it is not in terms of Switzerland or not. The way we actually articulate it to all hardware vendors and software vendors and all who come to us wanting us to build a backend in core for PyTorch and ship it by default is: we just only look at our user side of things. Like if users are using a particular piece of hardware, then we want to support it. We very much don't want to king-make the hardware side of things. So as the MacBooks have GPUs and as that stuff started getting increasingly interesting, we pushed Apple to push some engineers and work on the MPS support, and we spent significant time from Meta-funded engineers on that as well, because a lot of people are using the Apple GPUs and there's demand. So we kind of mostly look at it from the demand side. We never look at it from like, oh, which hardware should we start taking opinions on.
Swyx [00:10:27]: Is there a future in which, because Mojo or Modular Mojo is kind of a superset of Python, is there a future in which PyTorch might use Mojo features optionally?
Soumith [00:10:36]: I think it depends on how well integrated it is into the Python ecosystem. So if Mojo is like a pip install and it's readily available and users feel like they can use Mojo so smoothly within their workflows in a way that just is low friction, we would definitely look into that. Like in the same way PyTorch now depends on Triton, OpenAI Triton, and we never had a conversation that was like, huh, that's like a dependency. Should we just build a Triton of our own or should we use Triton? It almost doesn't, like those conversations don't really come up for us. The conversations are more, well, does Triton have 10,000 dependencies and is it hard to install? We almost don't look at these things from a strategic leverage point of view. We look at these things from a user experience point of view, like is it easy to install? Is it smoothly integrated and does it give enough benefits for us to start depending on it? If so, yeah, we should consider it. That's how we think about it.
Swyx [00:11:37]: You're inclusive by default as long as it meets the minimum bar of, yeah, but like maybe I phrased it wrongly.
Maybe it's more like what problems would you look to solve that you have right now?
Soumith [00:11:48]: I think it depends on what problems Mojo will be useful at.
Swyx [00:11:52]: Mainly a performance pitch, some amount of cross compiling pitch.
Soumith [00:11:56]: Yeah, I think the performance pitch for Mojo was like, we're going to be performant even if you have a lot of custom stuff, you're going to write arbitrary custom things and we will be performant. And that value proposition is not clear to us from the PyTorch side to consider it for PyTorch. So PyTorch, it's actually not 250 operators, it's like a thousand operators. PyTorch exposes about a thousand operators and people kind of write their ideas in the thousand operators of PyTorch. Mojo is like, well, maybe it's okay to completely sidestep those thousand operators of PyTorch and just write it in a more natural form. Just write raw Python, write for loops or whatever, right? So from the consideration of how do we intersect PyTorch with Mojo, I can see one use case where you have custom stuff for some parts of your program, but mostly it's PyTorch. And so we can probably figure out how to make it easier for, say, Torch.compile to smoothly also consume Mojo subgraphs, and, like, you know, the interoperability being actually usable, that I think is valuable. But Mojo as a fundamental front end would be replacing PyTorch, not augmenting PyTorch. So in that sense, I don't see a synergy in more deeply integrating Mojo.
Pytorch vs MLX
Swyx [00:13:21]: So call out to Mojo whenever they have written something in Mojo and there's some performance related thing going on. And then since you mentioned Apple, what should people think of PyTorch versus MLX?
Soumith [00:13:32]: I mean, MLX is early and I know the folks well. Ani used to work at FAIR and I used to chat with him all the time. He used to be based out of New York as well. The way I think about MLX is that MLX is specialized for Apple right now. It has a happy path because it's defined its product in a narrow way. At some point MLX either says we will only be supporting Apple and we will just focus on enabling, you know, there's a framework if you use your MacBook, but once you like go server side or whatever, that's not my problem and I don't care. Or MLX enters the server side set of things as well. Like one of these two things will happen, right? If the first thing happens, MLX's overall addressable market will be small, but it'll probably do well within that addressable market. If it enters the second phase, they're going to run into all the same complexities that we have to deal with. They will not have any magic wand and they will have more complex work to do. They probably wouldn't be able to move as fast.
Swyx [00:14:44]: Like having to deal with distributed compute?
Soumith [00:14:48]: Distributed, NVIDIA and AMD GPUs, like just like having a generalization of the concept of a backend, how they treat compilation with plus overheads. Right now they're deeply assumed like the whole MPS graph thing. So they need to think about all these additional things if they end up expanding onto the server side, and they'll probably build something like PyTorch as well, right? Like eventually that's where it will land. And I think there they will kind of fail on the lack of differentiation. Like it wouldn't be obvious to people why they would want to use it.
Swyx [00:15:24]: I mean, there are some cloud companies offering M1 and M2 chips on servers.
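(To make the "we look at it from the demand side" backend story concrete: the same user code runs on CUDA, on Apple's MPS backend, or on CPU. A minimal sketch of our own, using PyTorch's real availability checks:)

```python
import torch

# Pick whichever accelerator this machine actually has; no king-making.
if torch.cuda.is_available():
    device = torch.device("cuda")          # NVIDIA (or ROCm builds of torch)
elif getattr(torch.backends, "mps", None) and torch.backends.mps.is_available():
    device = torch.device("mps")           # Apple GPUs via Metal
else:
    device = torch.device("cpu")           # universal fallback

model = torch.nn.Linear(128, 64).to(device)
x = torch.randn(32, 128, device=device)
print(device, model(x).shape)              # identical code path everywhere
```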
I feel like it might be interesting for Apple to pursue that market, but it's not their core strength.
Soumith [00:15:33]: Yeah. If Apple can figure out their interconnect story, maybe, like then it can become a thing.
Swyx [00:15:40]: Honestly, that's more interesting than the cars. Yes.
Soumith [00:15:43]: I think the moat that NVIDIA has right now, I feel, is that they have the interconnect that no one else has. Like AMD GPUs are pretty good. I'm sure there's various silicon that is not bad at all, but the interconnect, like NVLink, is uniquely awesome. I'm sure the other hardware providers are working on it, but-
Swyx [00:16:04]: I feel like when you say it's uniquely awesome, you have some appreciation of it that the rest of us don't. I mean, the rest of us just like, you know, we hear marketing lines, but what do you mean when you say NVIDIA is very good at networking? Obviously they made the acquisition maybe like 15 years ago.
Soumith [00:16:15]: Just the bandwidth it offers and the latency it offers. I mean, TPUs also have a good interconnect, but you can't buy them. So you have to go to Google to use it.
PyTorch Mafia
Alessio [00:16:27]: Who are some of the other FAIR PyTorch alumni that are building cool companies? I know you have Fireworks AI, Lightning AI, Lepton, and Yangqing, you knew since college when he was building Caffe?
Soumith [00:16:40]: Yeah, so Yangqing and I used to be framework rivals, PyTorch, I mean, we were all a very small close-knit community back then. Caffe, Torch, Theano, Chainer, Keras, various frameworks. I mean, it used to be more like 20 frameworks. I can't remember all the names. CCV by Liu Liu, who is also based out of SF. And I would actually like, you know, one of the ways it was interesting is you went into the framework guts and saw if someone wrote their own convolution kernel or they were just copying someone else's. There were four or five convolution kernels that were unique and interesting. There was one from this guy out of Russia, I forgot the name, but I remember he was awesome enough to have written their own kernel. And at some point there, I built out these benchmarks called convnet-benchmarks. They were just benchmarking all the convolution kernels that were available at that time. It hilariously became big enough that at that time AI was getting important, but not important enough that industrial strength players came in to do these kinds of benchmarking and standardization. Like we have MLPerf today. So a lot of the startups were using convnet-benchmarks in their pitch decks as like, oh, you know, on convnet-benchmarks, this is how we fare, so you should fund us. I remember Nirvana actually was at the top of the pack because Scott Gray wrote amazingly fast convolution kernels at that time. Very interesting, but separate times. But to answer your question, Alessio, I think mainly Lepton and Fireworks are the two most obvious ones, but I'm sure the fingerprints are a lot wider. They're just people who worked within the PyTorch/Caffe2 cohort of things and now end up at various other places.
Swyx [00:18:50]: I think, both as an investor and as people looking to build on top of their services, it's an uncomfortable slash, like, I don't know what I don't know pitch. Because I've met Yangqing and I've met Lin Qiao. Yeah, I've met these folks and they're like, you know, we are deep in the PyTorch ecosystem and we serve billions of inferences a day or whatever at Facebook and now we can do it for you. And I'm like, okay, that's great.
Like, what should I be wary of or cautious of when these things happen? Because I'm like, obviously this experience is extremely powerful and valuable. I just don't know what I don't know. Like, what should people know about these sorts of new inference-as-a-service companies?
Soumith [00:19:32]: I think at that point you would be investing in them for their expertise of one kind. So if they've been at a large company, but they've been doing amazing work, you would be thinking about it as what these people bring to the table is that they're really good at like GPU programming or understanding the complexity of serving models once it hits a certain scale. You know, various expertise like from the infra and AI and GPUs point of view. What you would obviously want to figure out is whether their understanding of the external markets is clear, whether they know and understand how to think about running a business, understanding how to be disciplined about making money or, you know, various things like that.
Swyx [00:20:23]: Maybe I'll put it like, actually I will de-emphasize the investing bit and just more as a potential customer. Oh, okay. Like, it's more okay, you know, you have PyTorch gods, of course. Like, what else should I know?
Soumith [00:20:37]: I mean, I would not care about who's building something. If I'm trying to be a customer, I would care about whether...
Swyx [00:20:44]: Benchmarks.
Soumith [00:20:44]: Yeah, I use it and it's usability and reliability and speed, right?
Swyx [00:20:51]: Quality as well.
Soumith [00:20:51]: Yeah, if someone from some random unknown place came to me and said, use our stuff, it's great, and I have the bandwidth, I probably will give it a shot. And if it turns out to be great, like I'll just use it.
Benchmark drama
Swyx [00:21:07]: Okay, great. And then maybe one more thing about benchmarks, since we already brought it up and you brought up convnet-benchmarks. There was some recent drama around AnyScale. AnyScale released their own benchmarks and obviously they look great on their own benchmarks, but maybe didn't give the other... I feel there are two lines of criticism. One, which is they didn't test apples to apples on the kind of endpoints that the other providers, that they are competitors with, on their benchmarks, and that is due diligence baseline. And then the second would be more just optimizing for the right thing. You had some commentary on it. I'll just kind of let you riff.
Soumith [00:21:41]: Yeah, I mean, in summary, basically my criticism of that was AnyScale built these benchmarks for end users to just understand what they should pick, right? And that's a very good thing to do. I think what they didn't do a good job of is give that end user a full understanding of what they should pick. Like they just gave them a very narrow slice of understanding. I think they just gave them latency numbers and that's not sufficient, right? You need to understand your total cost of ownership at some reasonable scale. Not oh, one API call is one cent, but a thousand API calls are 10 cents. Like people can misprice to cheat on those benchmarks. So you want to understand, okay, like how much is it going to cost me if I actually subscribe to you and do like a million API calls a month or something? And then you want to understand the latency and reliability, not just from one call you made, but an aggregate of calls you've made over several various times of the day and times of the week.
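(A toy illustration of the rigor Soumith is asking for: aggregate latency over many calls made at different times, and fold in total cost of ownership at a realistic volume, rather than quoting one number. All figures below are invented for illustration:)

```python
import statistics

# Latency samples (ms) collected across different hours and days:
latency_ms = [180, 210, 950, 190, 205, 400, 185, 220]

price_per_1k_calls_usd = 0.10     # hypothetical list price
calls_per_month = 1_000_000       # realistic subscription volume

p50 = statistics.median(latency_ms)
p95 = sorted(latency_ms)[int(0.95 * (len(latency_ms) - 1))]
monthly_cost = calls_per_month / 1_000 * price_per_1k_calls_usd

print(f"p50={p50} ms, p95={p95} ms, est. monthly cost=${monthly_cost:,.2f}")
```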
And the nature of the workloads, is it just some generic single paragraph that you're sending that is cacheable? Or is it like testing of real world workloads? I think that kind of rigor in presenting that benchmark wasn't there. It was a much more narrow sliver of what should have been a good benchmark. That was my main criticism. And I'm pretty sure if, before they released it, they showed it to their other stakeholders who would be caring about this benchmark because they are present in it, they would have easily just pointed out these gaps. And I think they didn't do that and they just released it. So I think those were the two main criticisms. I think they were fair and Robert took it well.
Swyx [00:23:40]: And he took it very well. And we'll have him on at some point and we'll discuss it. But I think it's important for, I think the market being maturing enough that people start caring and competing on these kinds of things means that we need to establish what best practice is, because otherwise everyone's going to play dirty.
Soumith [00:23:55]: Yeah, absolutely. My view of the LLM inference market in general is that it's the laundromat model. Like the margins are going to drive down towards the bare minimum. It's going to be all kinds of arbitrage between how much you can get the hardware for and then how much you sell the API and how much latency your customers are willing to let go. You need to figure out how to squeeze your margins. Like what is your unique thing here? Like I think Together and Fireworks and all these people are trying to build some faster CUDA kernels and faster, you know, hardware kernels in general. But those moats only last for a month or two. These ideas quickly propagate.
Swyx [00:24:38]: Even if they're not published?
Soumith [00:24:39]: Even if they're not published, the idea space is small. So even if they're not published, the discovery rate is going to be pretty high. It's not like we're talking about a combinatorial thing that is really large. You're talking about Llama style LLM models. And we're going to beat those to death on a few different hardware SKUs, right? It's not even like we have a huge diversity of hardware you're going to aim to run it on. Now when you have such a narrow problem and you have a lot of people working on it, the rate at which these ideas are going to get figured out is going to be pretty rapid.
Swyx [00:25:15]: Is it a standard bag of tricks? Like the standard one that I know of is, you know, fusing operators and-
Soumith [00:25:22]: Yeah, it's the standard bag of tricks on figuring out how to improve your memory bandwidth and all that, yeah.
Alessio [00:25:28]: Any ideas of things that are not being beaten to death that people should be paying more attention to?
Novel PyTorch Applications
Swyx [00:25:34]: One thing I was like, you know, you have a thousand operators, right? Like what's the most interesting usage of PyTorch that you're seeing maybe outside of this little bubble?
Soumith [00:25:41]: So PyTorch, it's very interesting and scary at the same time, but basically it's used in a lot of exotic ways, like from the ML angle, what kind of models are being built? And you get all the way from state space models and all of these things to stuff like nth-order differentiable models, like neural ODEs and stuff like that. I think there's one set of interestingness factor from the ML side of things. And then there's the other set of interesting factor from the applications point of view.
It's used in Mars Rover simulations, to drug discovery, to Tesla cars. And there's a huge diversity of applications in which it is used. So in terms of the most interesting application side of things, I think I'm scared at how many interesting things that are also very critical and really important it is used in. I think the scariest was when I went to visit CERN at some point and they said they were using PyTorch and they were using GANs at the same time for particle physics research. And I was scared more about the fact that they were using GANs than that they were using PyTorch, because at that time I was a researcher focusing on GANs. But the diversity is probably the most interesting. How many different things it is being used in. I think that's the most interesting to me from the applications perspective. From the models perspective, I think I've seen a lot of them. Like the really interesting ones to me are where we're starting to combine search and symbolic stuff with differentiable models, like the whole AlphaGo style models is one example. And then I think we're attempting to do it for LLMs as well, with various reward models and search. I mean, I don't think PyTorch is being used in this, but the whole AlphaGeometry thing was interesting because, again, it's an example of combining the symbolic models with the gradient based ones. But there is stuff like AlphaGeometry where PyTorch is used, especially when you intersect biology and chemistry with ML. In those areas, you want stronger guarantees on the output. So yeah, maybe from the ML side, those things to me are very interesting right now.
Swyx [00:28:03]: Yeah. People are very excited about the AlphaGeometry thing. And it's kind of like, for me, it's theoretical. It's great. You can solve some Olympiad questions. I'm not sure how to make that bridge over into the real world applications, but I'm sure people smarter than me will figure it out.
Synthetic Data vs Symbolic Models
Soumith [00:28:18]: Let me give you an example of it. You know how the whole thing about synthetic data will be the next rage in LLMs is a thing?
Swyx [00:28:27]: Already is a rage.
Soumith [00:28:28]: Which I think is fairly misplaced in how people perceive it. People think synthetic data is some kind of magic wand that you wave and it's going to be amazing. Synthetic data is useful in neural networks right now because we as humans have figured out a bunch of symbolic models of the world or made up certain symbolic models because of human innate biases. So we've figured out how to ground particle physics in a 30 parameter model. And it's just very hard to compute, as in it takes a lot of flops to compute, but it only has 30 parameters or so. I mean, I'm not a physics expert, but it's a very low rank model. We built mathematics as a field that basically is very low rank. Language, a deep understanding of language, like the whole syntactic parse trees and just understanding how language can be broken down into a formal symbolism, is something that we figured out. So we basically as humans have accumulated all this knowledge on these subjects: either synthetic, we created those subjects in our heads, or we grounded some real world phenomenon into a set of symbols. But we haven't figured out how to teach neural networks symbolic world models directly. The only way we have to teach them is generating a bunch of inputs and outputs and gradient descending over them.
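(To make that last point concrete, here is a toy sketch of our own: sampling a symbolic model, integer addition in this case, into the input/output pairs a network would be trained on. Where no such symbolic model exists, there is nothing to sample from.)

```python
import random

def symbolic_model(a: int, b: int) -> int:
    # The compact, "low rank" ground truth we already possess.
    return a + b

def make_synthetic_pairs(n: int, seed: int = 0):
    """Sample the symbolic model into (prompt, target) training pairs."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        a, b = rng.randint(0, 99), rng.randint(0, 99)
        pairs.append((f"{a}+{b}=", str(symbolic_model(a, b))))
    return pairs

print(make_synthetic_pairs(3))  # three (prompt, target) pairs, e.g. ('12+7=', '19')
```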
So in areas where we have the symbolic models and we need to teach all the knowledge we have that is better encoded in the symbolic models, what we're doing is we're generating a bunch of synthetic data, a bunch of input output pairs, and then giving that to the neural network and asking it to learn the same thing that we already have a better low rank model of, in gradient descent, in a much more over-parameterized way. Outside of this, like where we don't have good symbolic models, synthetic data obviously doesn't make any sense. So synthetic data is not a magic wand where it'll work in all cases in every case or whatever. It's just where we as humans already have good symbolic models of. We need to impart that knowledge to neural networks and we figured out that synthetic data is a vehicle to impart this knowledge to them. So, but people, because maybe they don't know enough about synthetic data as a notion, but they hear, you know, the next wave of data revolution is synthetic data. They think it's some kind of magic where we just create a bunch of random data somehow. They don't think about how, and then they think that's just a revolution. And I think that's maybe a gap in understanding most people have in this hype cycle.
Swyx [00:31:23]: Yeah, well, it's a relatively new concept, so. Oh, there's two more that I'll put in front of you and then you can see what you respond. One is, you know, I have this joke that it's, you know, it's only synthetic data if it's from the Mistral region of France, otherwise it's just a sparkling distillation, which is what Nous Research is doing. Like they're distilling GPT-4 by creating synthetic data from GPT-4, creating mock textbooks inspired by Phi 2 and then fine tuning open source models like Llama. And so I don't know, I mean, I think that's, should we call that synthetic data? Should we call it something else? I don't know.
Soumith [00:31:57]: Yeah, I mean, the outputs of LLMs, are they synthetic data? They probably are, but I think it depends on the goal you have. If your goal is you're creating synthetic data with the goal of trying to distill GPT-4's superiority into another model, I guess you can call it synthetic data, but it also feels like disingenuous because your goal is I need to copy the behavior of GPT-4 and-
Swyx [00:32:25]: It's also not just behavior, but data set. So I've often thought of this as data set washing. Like you need one model at the top of the chain, you know, unnamed French company that has that, you know, makes a model that has all the data in it that we don't know where it's from, but it's open source, hey, and then we distill from that and it's great. To be fair, they also use larger models as judges for preference ranking, right? So that is, I think, a very, very accepted use of synthetic.
Soumith [00:32:53]: Correct. I think it's a very interesting time where we don't really have good social models of what is acceptable depending on how many bits of information you use from someone else, right? It's like, okay, you use one bit. Is that okay? Yeah, let's accept it to be okay. Okay, what about if you use 20 bits? Is that okay? I don't know. What if you use 200 bits? I don't think we as society have ever been in this conundrum where we have to be like, where is the boundary of copyright or where is the boundary of socially accepted understanding of copying someone else? We haven't been tested this mathematically before,
So yeah, I think this New York Times opening eye case is gonna go to the Supreme Court and we'll have to decide it because I think we never had to deal with it before. And then finally, for synthetic data, the thing that I'm personally exploring is solving this great stark paradigm difference between rag and fine tuning, where you can kind of create synthetic data off of your retrieved documents and then fine tune on that. That's kind of synthetic. All you need is variation or diversity of samples for you to fine tune on. And then you can fine tune new knowledge into your model. I don't know if you've seen that as a direction for synthetic data.Soumith [00:34:13]: I think you're basically trying to, what you're doing is you're saying, well, language, I know how to parametrize language to an extent. And I need to teach my model variations of this input data so that it's resilient or invariant to language uses of that data.Swyx [00:34:32]: Yeah, it doesn't overfit on the wrong source documents.Soumith [00:34:33]: So I think that's 100% synthetic. You understand, the key is you create variations of your documents and you know how to do that because you have a symbolic model or like some implicit symbolic model of language.Swyx [00:34:48]: Okay.Alessio [00:34:49]: Do you think the issue with symbolic models is just the architecture of the language models that we're building? I think maybe the thing that people grasp is the inability of transformers to deal with numbers because of the tokenizer. Is it a fundamental issue there too? And do you see alternative architectures that will be better with symbolic understanding?Soumith [00:35:09]: I am not sure if it's a fundamental issue or not. I think we just don't understand transformers enough. I don't even mean transformers as an architecture. I mean the use of transformers today, like combining the tokenizer and transformers and the dynamics of training, when you show math heavy questions versus not. I don't have a good calibration of whether I know the answer or not. I, you know, there's common criticisms that are, you know, transformers will just fail at X. But then when you scale them up to sufficient scale, they actually don't fail at that X. I think there's this entire subfield where they're trying to figure out these answers called like the science of deep learning or something. So we'll get to know more. I don't know the answer.Meta AI and Llama 2/3Swyx [00:35:57]: Got it. Let's touch a little bit on just Meta AI and you know, stuff that's going on there. Maybe, I don't know how deeply you're personally involved in it, but you're our first guest with Meta AI, which is really fantastic. And Llama 1 was, you know, you are such a believer in open source. Llama 1 was more or less the real breakthrough in open source AI. The most interesting thing for us covering on this, in this podcast was the death of Chinchilla, as people say. Any interesting insights there around the scaling models for open source models or smaller models or whatever that design decision was when you guys were doing it?Soumith [00:36:31]: So Llama 1 was Guillaume Lample and team. There was OPT before, which I think I'm also very proud of because we bridged the gap in understanding of how complex it is to train these models to the world. Like until then, no one really in gory detail published.Swyx [00:36:50]: The logs.Soumith [00:36:51]: Yeah. Like, why is it complex? And everyone says, oh, it's complex. But no one really talked about why it's complex. 
I think OPT was cool.
Swyx [00:37:02]: I met Susan and she's very, very outspoken. Yeah.
Soumith [00:37:05]: We probably, I think, didn't train it for long enough, right? That's kind of obvious in retrospect.
Swyx [00:37:12]: For a 175B. Yeah. You trained it according to Chinchilla at the time or?
Soumith [00:37:17]: I can't remember the details, but I think it's a commonly held belief at this point that if we trained OPT longer, it would actually end up being better. Llama 1, I think, was Guillaume Lample and team. Guillaume is fantastic and went on to build Mistral. I wasn't too involved in that side of things. So I don't know what you're asking me, which is how did they think about scaling laws and all of that? Llama 2, I was more closely involved in. I helped them a reasonable amount with their infrastructure needs and stuff. And Llama 2, I think, was more like, let's get to the evolution. At that point, we kind of understood what we were missing from the industry's understanding of LLMs. And we needed more data, and we needed to train the models for longer. And we made, I think, a few tweaks to the architecture and we scaled up more. And that was Llama 2. I think Llama 2, you can think of it as after Guillaume left, the team kind of rebuilt their muscle around Llama 2. And Hugo, I think, who's the first author, is fantastic. And I think he did play a reasonably big role in Llama 1 as well.
Soumith [00:38:35]: And he overlaps between Llama 1 and 2. So in Llama 3, obviously, hopefully, it'll be awesome.
Alessio [00:38:42]: Just one question on Llama 2, and then we'll try and fish Llama 3 spoilers out of you. In the Llama 2 paper, the loss curves of the 34B and 70B parameter models still seem kind of steep. Like they could go lower. How, from an infrastructure level, how do you allocate resources? Could they have just gone longer or were you just, hey, this is all the GPUs that we can burn and let's just move on to Llama 3 and then make that one better?
Soumith [00:39:07]: Instead of answering specifically about that Llama 2 situation or whatever, I'll tell you how we think about things. Generally, we're, I mean, Mark already said some numbers, right?
Swyx [00:39:20]: So let's cite those things again. All I remember is like 600K GPUs.
Soumith [00:39:24]: That is by the end of this year, and 600K H100 equivalents. With 350K H100s, including all of our other GPU or accelerator stuff, it would be 600-and-something-K aggregate capacity.
Swyx [00:39:38]: That's a lot of GPUs.
Soumith [00:39:39]: We'll talk about that separately. But the way we think about it is we have a train of models, right? Llama 1, 2, 3, 4. And we have a bunch of GPUs. I don't think we're short of GPUs. Like-
Swyx [00:39:54]: Yeah, no, I wouldn't say so. Yeah, so it's all a matter of time.
Soumith [00:39:56]: I think time is the biggest bottleneck. It's like, when do you stop training the previous one and when do you start training the next one? And how do you make those decisions? The data, do you have net new data, better clean data for the next one in a way that it's not worth really focusing on the previous one? It's just a standard iterative product. You're like, when is the iPhone 1? When do you start working on iPhone 2? Where is the iPhone? And so on, right? So mostly the considerations are time and generation, rather than GPUs, in my opinion.
Alessio [00:40:31]: So one of the things with the scaling laws, like Chinchilla is optimal to balance training and inference costs.
I think at Meta's scale, you would rather pay a lot more maybe at training and then save on inference. How do you think about that from an infrastructure perspective? I think in your tweet, you say you can try and guess on like how we're using these GPUs. Can you just give people a bit of understanding? It's like, because I've already seen a lot of VCs say, Llama 3 has been trained on 600,000 GPUs and that's obviously not true, I'm sure. How do you allocate between the research, FAIR and the Llama training, the inference on Instagram suggestions that get me to scroll, like AI-generated stickers on WhatsApp and all of that?
Soumith [00:41:11]: Yeah, we haven't talked about any of this publicly, but as a broad stroke, it's like how we would allocate resources of any other kinds at any company. You run a VC portfolio, how do you allocate your investments between different companies or whatever? You kind of make various trade-offs and you kind of decide, should I invest in this project or this other project, or how much should I invest in this project? It's very much a zero sum of trade-offs. And it also comes into play, how are your clusters configured, like overall, what you can fit of what size and what cluster and so on. So broadly, there's no magic sauce here. I mean, I think the details would add more spice, but also wouldn't add more understanding. It's just gonna be like, oh, okay, I mean, this looks like they just think about this as I would normally do.
Alessio [00:42:05]: So even the GPU rich run through the same struggles of having to decide where to allocate things.
Soumith [00:42:11]: Yeah, I mean, at some point I forgot who said it, but you kind of fit your models to the amount of compute you have. If you don't have enough compute, you figure out how to make do with smaller models. But no one as of today, I think, would feel like they have enough compute. I don't think I've heard any company within the AI space be like, oh yeah, like we feel like we have sufficient compute and we couldn't have done better. So that conversation, I don't think I've heard from any of my friends at other companies.
Eleuther
Swyx [00:42:47]: Stella from Eleuther sometimes says that because she has a lot of donated compute. She's trying to put it to interesting uses, but for some reason she's decided to stop making large models.
Soumith [00:42:57]: I mean, that's a cool, high conviction opinion that might pay off.
Swyx [00:43:01]: Why?
Soumith [00:43:02]: I mean, she's taking a path that most people don't care to take in this climate and she probably will have very differentiated ideas. I mean, think about the correlation of ideas in AI right now. It's so bad, right? So everyone's fighting for the same pie. In some weird sense, that's partly why I don't really directly work on LLMs. I used to do image models and stuff and I actually stopped doing GANs because GANs were getting so hot that I didn't have any calibration of whether my work would be useful or not, because, oh yeah, someone else did the same thing you did. It's like, there's so much to do, I don't understand why I need to fight for the same pie. So I think Stella's decision is very smart.
Making Bets
Alessio [00:43:53]: And how do you reconcile that with how we started the discussion about intrinsic versus extrinsic kind of like accomplishment or success? How should people think about that, especially when they're doing a PhD or early in their career?
Alessio [00:43:53]: And how do you reconcile that with how we started the discussion, about intrinsic versus extrinsic accomplishment or success? How should people think about that, especially when they're doing a PhD or early in their career? I think in Europe, when I walked through a lot of the posters and whatnot, there seemed to be mode collapse in the research, in a way: a lot of people working on the same things. Is it worth it for a PhD student to not take a bet on something that is maybe not as interesting, just because of funding and visibility and whatnot? Or, yeah, what suggestions would you give?
Soumith [00:44:28]: I think there's a baseline level of compatibility you need to have with the field. Basically, you need to figure out if you will get paid enough to eat, right? Whatever reasonable, normal lifestyle you want to have as a baseline. So you at least have to pick a problem within the neighborhood of fundable. You wouldn't wanna be doing something so obscure that people are like, I don't know... you can work on it, I guess.
Swyx [00:44:59]: Would you put a limit on fundability at, I'm just observing, something like three months of compute? That's the top line, like the max that you can spend on any one project.
Soumith [00:45:09]: I think that's very ill-specified, like how much compute, right? I think the notion of fundability is broader. It's more like, hey, is this family of models within the acceptable set of "you're not crazy" or something, right? Even something like neural ODEs, which is a very boundary-pushing thing, or state-space models or whatever: all of these things, I think, are still in fundable territory. When you're talking about, I'm gonna do one of the neuromorphic models and then apply image classification to them or something, then it becomes a bit questionable. Again, it depends on your motivation. Maybe if you're a neuroscientist, it actually is feasible. But if you're an AI engineer, like the audience of these podcasts, then it's more questionable. The way I think about it is, you need to figure out how you can be at the baseline level of fundability just so that you can live. And then, after that, really focus on intrinsic motivation and, depending on your strengths, how you can play to your strengths and your interests at the same time. I try to look at a bunch of ideas that are interesting to me, but I also try to play to my strengths. I'm not gonna go work on theoretical ML. I'm interested in it, but when I want to work on something like that, I try to partner with someone who is actually a good theoretical ML person and see if I actually have any value to provide. And if they think I do, then I come in. So I think you'd want to find that intersection of ideas you like that also play to your strengths, and I'd go from there. Everything else, like actually finding extrinsic success and all of that, I think, is somewhat immaterial. When you're talking about building ecosystems and stuff, slightly different considerations come into play, but that's a different conversation.
Swyx [00:47:06]: We're gonna pivot a little bit to just talking about open source AI. But one more thing I wanted to establish for Meta is this 600K number, just kind of rounding out the discussion: that's for all of Meta. So including your own inference needs, right? It's not just about training.
Soumith [00:47:19]: It's gonna be the number in our data centers for all of Meta, yeah.
Swyx [00:47:23]: Yeah, so there's a decent amount of workload serving Facebook and Instagram and whatever. And then, is there interest in your own hardware?
MTIA
Soumith [00:47:31]: We already talked about our own hardware. It's called MTIA. Our own silicon. I think we've even shown the standard photograph of you holding the chip that doesn't work. Like, as in, the chip that you basically just get like-
Swyx [00:47:51]: As a test, right?
Soumith [00:47:52]: Yeah, a test chip or whatever. So we are working on our silicon, and we'll probably talk more about it when the time is right, but-
Swyx [00:48:00]: What gaps do you have that the market doesn't offer?
Soumith [00:48:04]: Okay, I mean, this is easy to answer. So basically, remember how I told you about the memory hierarchy and sweet spots and all of that? Fundamentally, when you build hardware, you make it general enough that a wide set of customers and a wide set of workloads can use it effectively, while trying to get the maximum level of performance they can. The more specialized you make the chip, the more hardware-efficient it's going to be, the more power-efficient it's gonna be, and the easier it's going to be to write the software, like the kernels, to map that one or two workloads to that hardware, and so on. So it's pretty well understood across the industry that if you have a sufficiently large volume of a workload, you can specialize for it and get some efficiency gains, like power gains and so on. So the way you can think about every large company building silicon, and I think a bunch of the other large companies are building their own silicon as well, is that each of them has a sufficiently large set of verticalized workloads that can be specialized, that have a pattern to them that a more generic accelerator like an NVIDIA or an AMD GPU does not exploit. So there is some level of power efficiency that you're leaving on the table by not exploiting that. And you have sufficient scale, and you have sufficient forecasted stability that those workloads will exist in the same form, so that it's worth spending the time to build out a chip to exploit that sweet spot. Obviously, something like this is only useful if you hit a certain scale and if your forecast that those kinds of workloads will stay in the same specializable, exploitable form holds true. So yeah, that's why we're building our own chips.
Swyx [00:50:08]: Awesome.
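[Editor's note: a toy roofline calculation may make the specialization trade-off above concrete. Every number below is hypothetical, chosen only to illustrate the arithmetic; none of it describes MTIA or any real accelerator.]

```python
# Editor's sketch: a toy roofline model of the generality-vs-efficiency trade-off.
# All hardware numbers are made up for illustration; they describe no real chip.

def attainable_flops(intensity: float, peak_flops: float, mem_bw: float) -> float:
    """Roofline: achievable FLOP/s for a kernel with a given arithmetic
    intensity (FLOPs per byte moved), capped by compute or memory bandwidth."""
    return min(peak_flops, intensity * mem_bw)

# A hypothetical general-purpose GPU vs. a hypothetical accelerator specialized
# for one workload shape (lower peak, but a tuned memory system and power budget).
chips = {
    "generic GPU":      {"peak_flops": 1000e12, "mem_bw": 3.0e12, "watts": 700.0},
    "specialized ASIC": {"peak_flops":  400e12, "mem_bw": 2.0e12, "watts": 150.0},
}

intensity = 50.0  # FLOPs per byte of the (hypothetical) dominant production kernel
for name, c in chips.items():
    perf = attainable_flops(intensity, c["peak_flops"], c["mem_bw"])
    print(f"{name}: {perf:.2e} FLOP/s, {perf / c['watts']:.2e} FLOP/s per watt")
```

At this made-up arithmetic intensity, the specialized part delivers less raw throughput but roughly three times the throughput per watt, which is the kind of efficiency "left on the table" that the passage describes.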
Open Source AI
Alessio [00:50:09]: Yeah, I know we've been talking about a lot of different topics, but going back to open source, you had a very good tweet. You said that a single company's closed-source effort rate-limits against people's imaginations and needs. How do you think about all the impact that some of the Meta AI work in open source has been having, and maybe the directions of the whole open source AI space?
Soumith [00:50:32]: Yeah, in general, I think it's first worth talking about this in terms of open and not just open source, because with the whole notion of model weights, no one even knows what "source" means for these things. But just for this discussion, when I say open source, you can assume I'm just talking about open. And then there's the whole notion of licensing and all that: commercial, non-commercial, commercial with clauses, and so on. I think at a fundamental level, the biggest benefit of open source is that you make the distribution very wide. It's just available with no friction, and people can do transformative things in a way that's very accessible. Maybe it's open source, but it has a commercial license, and I'm a student in India. I don't care about the license. I just don't even understand the license. But the fact that I can use it and do something with it is very transformative to me. I got this thing in a very accessible way. And then it's various degrees, right? If it's open source but it actually has a commercial license, then a lot of companies are gonna benefit from gaining value that they didn't previously have, that they maybe had to pay a closed-source company for. So open source is just a very interesting tool that you can use in various ways. So there's, again, two kinds of open source. One is some large company doing a lot of work and then open sourcing it. And that kind of effort is not really feasible for, say, a band of volunteers doing it the same way. There's both a capital and an operational expenditure that the large company just decided to ignore and give away to the world for benefits of some kind, ones that are not as tangible as direct revenue. In that category, Meta has been doing incredibly good things. They fund a huge amount of the PyTorch development. They've open sourced Llama and that family of models, and several other fairly transformative projects. FAISS is one; Segment Anything, Detectron, Detectron 2, DensePose. I mean, it's-
Swyx [00:52:52]: Seamless. Yeah, Seamless.
Soumith [00:52:53]: The list is so long that we're not gonna cover it all. So I think Meta comes into that category where we spend a lot of CapEx and OpEx, we have a high talent density of great AI people, and we open our stuff. And the thesis for that: I remember when FAIR was started, the common question was, wait, why would Meta wanna start an open AI lab? What exactly is the benefit from a commercial perspective? And back then, the thesis was very simple: AI was rate-limiting Meta's ability to do things. Our ability to build various product integrations, moderation, various other things: AI was the limiting factor, and we just wanted AI to advance, and we didn't care whether the IP of the AI was uniquely in our possession or not. However the field advances, that accelerates Meta's ability to build a better product. So we just built an open AI lab, and we said, if this helps accelerate the progress of AI, that's strictly great for us. Very easy, rational reasoning, right? It's still the same to a large extent with the Llama stuff. The values are the same, but the argument is a bit more nuanced. And then there's a second kind of open source, which is: oh, we built this project on nights and weekends, we're very smart people, we open sourced it, and then we built a community around it. This is the Linux kernel and various software projects like that. I think about both of these kinds of open source as beneficial, and as different. They're different and beneficial in their own ways. The second one is really useful when there's an active arbitrage to be done: if someone's not really looking at a particular space because it's not commercially viable or whatever, a band of volunteers can just coordinate online, do something, and then make that happen. And that's great.
Open Source LLMs
I wanna cover a little bit about open source LLMs, maybe. Open source LLMs have been very interesting because I think we were trending towards an increase in open source in AI from 2010 all the way to 2017 or so, where more and more of the pressure within the community was to open source your stuff so that your methods get adopted. And then the LLM revolution kind of took the opposite turn. OpenAI stopped open sourcing their stuff, and DeepMind largely did too; the cloud providers and all these other providers didn't open source their stuff either. And it was not good, in the sense that, first, science done in isolation probably will just form its own bubble where people believe their own b******t or whatever. So there's that problem. And then there was the other problem, which was the accessibility part. Like, okay, I again always go back to: I'm a student in India with no money. What is my access to any of these closed models? At some scale, I have to pay money. That makes it a non-starter and stuff. And there's also the control thing. I strongly believe that if you want human-aligned stuff, you want all humans to give feedback, and you want all humans to have access to that technology in the first place. And I have actually seen, living in New York, whenever I come to Silicon Valley, I see a different cultural bubble. All the friends I hang out with talk about some random thing like Dyson spheres or whatever; that's a thing. And most of the world doesn't know or care about any of this stuff. It's definitely a bubble, and bubbles can form very easily. And when you make a lot of decisions because you're in a bubble, they're probably not globally optimal decisions. So I think open source, the distribution of open source, powers a certain kind of non-falsifiability that I think is very important. On the open source models, I think it's going great, in the sense that LoRA, I think, came out of the necessity of open source models needing to be fine-tunable in some way. Yeah, and I think DPO also came out of the academic, open source side of things. So, the closed-source labs: did any of them already have LoRA or DPO internally? Maybe. But that does not advance humanity in any way. It advances some company's probability of pulling off the winner-take-all outcome that I talked about earlier in the podcast.
Open Source and Trust
I don't know, it just feels fundamentally good. When people ask, well, what are the ways in which it is not okay? I find most of these arguments, and this might be a little controversial, but I find a lot of the arguments about whether closed-source models are safer or open-source models are safer very much related to what kind of culture people grew up in, what kind of society they grew up in. If they grew up in a society that they trusted, then I think they take the closed-source argument. And if they grew up in a society that they couldn't trust, where the norm was that you didn't trust your government because obviously it's corrupt or whatever, then I think the open-source argument is what they take. I think there's a deep connection to people's innate biases from their childhood, and their trust in society and government, that pushes them towards one opinion or the other. And I'm definitely in the camp of: open source is definitely going to have better outcomes for society. Closed source, to me, just means centralization of power, which, you know, is really hard to trust. So I think it's going well
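[Editor's note: since LoRA is credited here to the open ecosystem, a minimal PyTorch sketch of the idea (Hu et al., 2021) follows. It is an illustration of the low-rank-adapter technique only, not any lab's actual implementation, and the layer sizes and hyperparameters are arbitrary.]

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA layer: a frozen pretrained projection W plus a trainable
    low-rank update B @ A, scaled by alpha / r (Hu et al., 2021)."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pretrained weights
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base output plus the scaled low-rank correction.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Usage: wrap an existing projection and fine-tune only A and B.
layer = LoRALinear(nn.Linear(4096, 4096), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable params: {trainable}")  # 65,536 here, vs ~16.8M in the frozen base layer
```

The practical point matches the passage above: because only the small A and B matrices are trained, a model released as open weights can be adapted on modest hardware, which is exactly the constraint the open source community was working under.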

Universe Today Podcast
[Q&A] Mars Rovers On The Moon, Satellite Management, Life Near Blue Giants

Universe Today Podcast

Play Episode Listen Later Mar 5, 2024 47:05


Why won't NASA send a copy of Perseverance to the Moon? Can life exist on a planet around a blue giant star? How do satellites stay safe in orbit and avoid colliding with each other? Answering all these questions and more in this week's Q&A show.

Der KI-Podcast
Are We Flying to the Stars with AI?

Der KI-Podcast

Play Episode Listen Later Mar 5, 2024 41:37


SpaceX and Elon Musk, Blue Origin and Jeff Bezos, NASA, India, and China: everyone wants a piece of the new space race. Artificial intelligence is also along for the ride in the current spaceflight boom. Whether in self-driving, self-directed Mars rovers or in the analysis of enormous volumes of stellar data: modern space research would be unthinkable without AI. And space research may even fundamentally shape the development of artificial intelligence in the future. About the hosts: Fritz Espenlaub is a freelance tech journalist and presenter, including for Bayerischer Rundfunk and 1E9. Marie Kilg is a freelance journalist and innovation manager at the Deutsche Welle Lab. She was previously a product manager at Amazon Alexa. In this episode: 00:00 Intro 03:42 Guessing game: science fiction or real? 12:16 How AI helps us understand space 19:10 How AI can help us explore space, and survive out there! 26:10 Do we still need humans in space at all? 35:56 What we did with AI this week. Editorial team and contributors: David Beck, Cristina Cletiu, Chris Eckardt, Fritz Espenlaub, Marie Kilg, Mark Kleber, Hendrik Loven, Gudrun Riedl, Christian Schiffer, Gregor Schmalzried. Links and sources: AI sharpens image of a black hole: https://noirlab.edu/public/news/noirlab2310/?lang How ESA analyzes satellite data with the help of AI: https://www.esa.int/Applications/Observing_the_Earth/Rapid_Action_Coronavirus_Earth_observation_dashboard_now_available Searching for technosignatures with machine learning: https://www.space.com/machine-learning-seti-technosignatures How the Mars rover Perseverance uses AI to identify rock samples: https://mars.nasa.gov/resources/26782/perseverances-supercam-uses-aegis-for-the-first-time/ AI designs equipment for space: https://www.nasa.gov/science-research/nasa-turns-to-ai-to-design-mission-hardware/ The AI doctors of the future: https://ifk.uchicago.edu/news/will-virtual-doctors-care-for-future-astronauts-the-promise-and-perils-of-ai-in-space-medicine/ The Willy Wonka debacle: https://www.theguardian.com/uk-news/2024/feb/27/glasgow-willy-wonka-experience-slammed-as-farce-as-tickets-refunded AnnesNerdNight: https://www.tiktok.com/@annesnerdnight?lang=de-DE Quarks Daily: https://www.quarks.de/podcast/dailyquarks/ Contact: We welcome questions and comments at podcast@br.de. Support us: If you enjoy this podcast, we'd appreciate a rating on your favorite podcast platform. Subscribe to the KI-Podcast in the ARD Audiothek or wherever you get your podcasts so you never miss an episode. And feel free to recommend us to others!

History in Slow German
#82 The Mars Rover Perseverance Landing

History in Slow German

Play Episode Listen Later Feb 23, 2024 4:18


The Tech Blog Writer Podcast
2791: From NASA to VC: Sailesh Ramakrishnan's Stellar Journey in Tech

The Tech Blog Writer Podcast

Play Episode Listen Later Feb 3, 2024 34:21


How do visionaries navigate the complex pathways from groundbreaking ideas to revolutionary technologies? Today's episode of the Tech Talks Daily Podcast explores this fascinating journey with Sailesh Ramakrishnan, partner at Rocketship.vc. Sailesh's story is not just inspiring; it's a masterclass in the art of blending diverse disciplines to fuel innovation. From his early days dreaming of space exploration, leading him to a pivotal role at NASA, Sailesh has been on a relentless pursuit of knowledge and impact. His academic journey is a tapestry of engineering, construction management, and artificial intelligence, culminating in his work on AI for cancer modeling and developing reasoning capabilities for a robotic assistant for the elderly. But how does one transition from working on Mars Rovers at NASA to diving into the startup world? We delve into Sailesh's decision to leave NASA for the startup scene, his experience co-founding LocBox (later acquired by Square), and his reunion with former colleagues to launch Rocketship.vc. Here, Sailesh uses AI to unearth hidden gems in the global startup ecosystem, moving beyond traditional venture capital models. In our conversation, we uncover the essential elements that Sailesh believes are crucial for startup success: a strong, cohesive team and an unwavering focus. We'll also get his insights on the role of AI in empowering solo founders and the waves of innovation being spurred by the urgent need to address climate change. Check out the Sponsor of Tech Talks Daily. Step into the future of secure managed file transfer with Kiteworks. Visit kiteworks.com to get started. 

The A to Z English Podcast
A to Z This Day in World History | January 4th

The A to Z English Podcast

Play Episode Listen Later Jan 4, 2024 3:41


Here are some historical events that occurred on January 4 throughout world history: 1642: King Charles I of England attempts to arrest five members of Parliament, leading to the start of the English Civil War. 1847: Samuel Colt sells his first revolvers to the United States government. 1896: Utah is admitted as the 45th U.S. state. 1959: Luna 1, the first spacecraft to reach the vicinity of the Moon and to orbit the Sun, is launched by the Soviet Union. 1965: The British rock band The Who release their debut studio album, "My Generation." 1974: President Richard Nixon refuses to hand over tape recordings and documents subpoenaed by the Watergate Special Prosecutor. 1999: Former professional wrestler Jesse Ventura is sworn in as governor of Minnesota. 2004: NASA's Mars Rover "Spirit" lands on Mars. 2010: The Burj Khalifa, the world's tallest building, officially opens in Dubai, United Arab Emirates. These are just a few notable events that happened on January 4; there are, of course, many more events that occurred on this day throughout history. Podcast Website: https://atozenglishpodcast.com/a-to-z-this-day-in-world-history-january-4th/ Social Media: WeChat account ID: atozenglishpodcast Facebook Group: https://www.facebook.com/groups/671098974684413/ Tik Tok: @atozenglish1 Instagram: @atozenglish22 Twitter: @atozenglish22 A to Z Facebook Page: https://www.facebook.com/theatozenglishpodcast Check out our You Tube Channel: https://www.youtube.com/channel/UCds7JR-5dbarBfas4Ve4h8A Donate to the show: https://app.redcircle.com/shows/9472af5c-8580-45e1-b0dd-ff211db08a90/donations Robin and Jack started a new You Tube channel called English Word Master. You can check it out here: https://www.youtube.com/channel/UC2aXaXaMY4P2VhVaEre5w7A Become a member of Podchaser and leave a positive review! https://www.podchaser.com/podcasts/the-a-to-z-english-podcast-4779670 Join our Whatsapp group: https://forms.gle/zKCS8y1t9jwv2KTn7 Intro/Outro Music: Daybird by Broke for Free Support this podcast at: https://redcircle.com/the-a-to-z-english-podcast/donations Advertising Inquiries: https://redcircle.com/brands Privacy & Opt-Out: https://redcircle.com/privacy

Laugh It Up Fuzzball
Laugh It Up Fuzzballs (Ep. 369) - We like Ro-bots and we cannot lie

Laugh It Up Fuzzball

Play Episode Listen Later Oct 24, 2023 167:17


Welcome to the place where we get to let our geek flags fly and talk about all things geek. Basically a fuzzy guide to life, the universe, and everything but mostly geek stuff. This is a look into the world of geekdom and some geek news, comics, The Simpsons, Star Wars, and whatever randomness finds its way onto the recording. This level of the podcast is a great episode all about a bunch of robots, androids, automatons, and more that the Marshall, Blue, and I enjoy! If you like V.I.C.I. (Small Wonder), Blinky (Bucky O'Hare), Johnny 5 (Short Circuit), Maeve (Westworld), Robo (Chrono Trigger), the 80's Robot (The Muppets), K.I.T.T. (Knight Rider), Shinichi Mechazawa (Cromartie High School), Marvin the Paranoid Android, WALL-E, BMO (Adventure Time), Good Robot Us-es (Bill & Ted's Bogus Journey), Metalhead (TMNT), KOS-MOS (Xenosaga), Gigolo Joe (A.I.), the T-800 (The Terminator), The Rock Lords (GoBots), Bubo (Clash of the Titans 1981), Grimlock (Transformers), S.H.I.V.A. (X-Men), The Mars Rovers, Linguo (The Simpsons), Lazengann (Gurren Lagann), Trimaxion (Flight of the Navigator), Cobra B.A.T.s (GI Joe), Canti (FLCL), Gypsy, Crow T. Robot, and Tom Servo (MST3K), Bender (Futurama), Megaman, or The Iron Giant, then this is the super-chat for you! We also cover a bunch of honorable mentions and overall just have the usual geeky brodown you can expect when all the hosts get together. Enjoy! Congrats on completing Level 369 of the podcast! Think positive, test negative, stay safe, wash your hands, wear a mask, and good luck out there. Feel free to contact me on Twitter and/or Instagram (@wookieeriot). You can also reach the show by e-mail, laughitupfuzzballpodcast@gmail.com., or by joining the Facebook group (https://www.facebook.com/groups/1879505335626093). I'd love to hear from you. Merch is available at teepublic.com/user/laugh-it-up-fuzzball. Also subscribe to the feed on Apple podcasts, Google podcasts, Stitcher, Breaker, IHeartRadio, RadioPublic, Spotify, or any of the apps which pull from those sources. Go do your thing so I can keep doing mine. If you feel so inclined, drop a positive rating or comment on those apps. Ratings help others find the madness. Tell your friends, geekery is always better with peers. Thank YOU for being a part of this hilarity! There's a plethora of ways to comment about the show and I look forward to seeing your thoughts, comments, and ideas. May the force be with us all, thanks for stopping by, you stay classy, be excellent to each other and party on dudes! TTFN… Wookiee out! --- Send in a voice message: https://podcasters.spotify.com/pod/show/laugh-it-up-fuzzball/message Support this podcast: https://podcasters.spotify.com/pod/show/laugh-it-up-fuzzball/support

Product Talk
EP 330 - Wind River CPO on Navigating Product Leadership in a Changing Landscape

Product Talk

Play Episode Listen Later Sep 22, 2023 37:12


How can product leaders adapt to change in an ever-evolving landscape? In episode #12 of our CPO Rising Series with Products That Count CPO Renee Niemi, Wind River CPO Avijit Sinha discusses the dynamic world of product leadership and transformation. Discover how Wind River transitioned from private equity to a public subsidiary, specializing in embedded and cloud software for critical systems like the Mars Rover and telescopes. Gain valuable insights into the impact of software-defined products across various industries and learn how Wind River's growth strategy and product development adapt to rapidly changing landscapes. Explore their unique approach to customer obsession, diversity, and culture, and uncover the secrets of successful product leadership in a fast-paced world.

Human Capital Innovations (HCI) Podcast
S46E29 - Corporate Social Activism Demystified, with Eric Thomas and Bridgette McAdoo

Human Capital Innovations (HCI) Podcast

Play Episode Listen Later Sep 5, 2023 27:49


In this HCI Podcast episode, Dr. Jonathan H. Westover talks with Eric Thomas and Bridgette McAdoo about how to demystify corporate social activism. Eric Thomas leads the DE&I practice at Genesys. He is charged with developing global programs that deliver progressive diversity and foster an inclusive culture throughout the company. He focuses on programs that help attract, develop, and promote talent that is representative of the communities the company serves. Eric encourages employees to bring their best selves to work every day. Prior to his current role, Eric served as vice president of Global Delivery in Genesys Professional Services, leading a team of more than 450 employees responsible for worldwide implementation services. Eric held multiple leadership roles at Alcatel-Lucent and Ericsson, where he drove employee resource groups focused on the unique challenges African Americans face in corporate settings. Eric co-founded and served on the board of three non-profit organizations that mentored young African American males in underserved cities. Bridgette McAdoo is the VP & Chief Sustainability Officer at Genesys. She is responsible for sustainability as a management approach that holistically optimizes Genesys' economic, social, and environmental impact. In her role leading sustainability, Bridgette drives stakeholder engagement, education, and the evolution of the sustainable strategy and programs across the company. Bridgette has over 20 years of experience in sustainability leadership roles across multiple sectors, including the World Wildlife Fund (WWF), where she most recently led corporate strategy and engagement for WWF's Freshwater and Food goals. She also worked as Global Director of Sustainability for KFC, where she headed all sustainability issues for the brand, internally within Yum! Brands and externally with various sustainability stakeholders, and held operations roles that were part of NASA's Space Shuttle and Mars Rover programs. Further explore the topics discussed in this episode with the new HCIConsulting Chatbot: https://poe.com/HCIConsulting. Check out the HCI Academy: Courses, Micro-Credentials, and Certificates to Upskill and Reskill for the Future of Work! Check out the LinkedIn Alchemizing Human Capital Newsletter. Check out Dr. Westover's book, The Future Leader. Check out Dr. Westover's book, 'Bluer than Indigo' Leadership. Check out Dr. Westover's book, The Alchemy of Truly Remarkable Leadership. Check out the latest issue of the Human Capital Leadership magazine. Each HCI Podcast episode (Program ID No. 627454) has been approved for 0.50 HR (General) recertification credit hours toward aPHR™, aPHRi™, PHR®, PHRca®, SPHR®, GPHR®, PHRi™ and SPHRi™ recertification through the HR Certification Institute® (HRCI®). Each HCI Podcast episode (Program ID: 24-DP529) has been approved for 0.50 HR (General) SHRM Professional Development Credits (PDCs) for SHRM-CP and SHRM-SCP recertification through SHRM, as part of the knowledge and competency programs related to the SHRM Body of Applied Skills and Knowledge™ (the SHRM BASK™). Human Capital Innovations has been pre-approved by the ATD Certification Institute to offer educational programs that can be used towards initial eligibility and recertification of the Certified Professional in Talent Development (CPTD) and Associate Professional in Talent Development (APTD) credentials. Each HCI Podcast episode qualifies for a maximum of 0.50 points.

Jim and Them
Toxic Gossip Train - #776 Part 1

Jim and Them

Play Episode Listen Later Jul 1, 2023 101:15


Miranda Sings Allegations: Accusations against famous YouTuber Colleen Ballinger, aka Miranda Sings, have resurfaced and we have some catching up to do! Toxic Gossip Train: In a Kevin Spacey-esque move, Colleen Ballinger releases an unhinged response video in the form of a 10 minute ukulele song, LET'S GO We Didn't Start The Fire: Holy shit, did Fall Out Boy just release CRINGE!? DEATH PENALTY!, HORRIFIC!, EXECUTIONER!, I'M A VIRGO!, KICK!, DIRT INTERNET!, GAMBLING!, STAKE CASINO!, DRAKE!, CRYPTO!, SLOTS!, CASINO!, LAS VEGAS!, LEGAL!, SWEEPSTAKES!, PART 2 BELLIGERENT!, SCREAMING!, PART 2 JEFF!, TEASER!, JEFF'S OPEN MIC!, MIRANDA SINGS!, COLLEEN BALLINGER!, YOUTUBER!, GROOMING!, ALLEGATIONS!, CONTROVERSY!, ADAM MCINTYRE!, ASS!, SEND PICS!, UNDERWEAR!, JUSTIN ROILAND!, GNARGOYLE!, PERIODS!, GORDON HALE!, PARASOCIAL!, FART JOKE!, SPREAD LEGS!, LILY SINGH!, TRENT BALLINGER!, KORY DESOTO!, WIGGLES!, SPARKLING WIGGLES!, HUMOR!, UKULELE!, APOLOGY!, RESPONSE!, TOXIC GOSSIP TRAIN!, MANIPULATION STATION!, HARASSED!, DAMNING!, KEVIN SPACEY!, GROW AND CHANGE!, FAKE NEWS!, GOSSIP!, RUMORS!, REPUTATION!, BOB DYLAN!, MODERN DAY!, BADLANDS CHUGS!, FRIDGE COLD!, SPRITE!, THE WEEKND!, ABEL!, THE IDOL!, GAY!, PUSSY!, YES MEN!, I'M NOT A GROOMER!, I'M JUST A LOSER!, SOY UN GROOMADOR!, 50 CENT!, KEENAN CAHILL!, DOWN ON ME!, FARTBOARD!, FALLOUT BOY!, IMAGINE DRAGONS!, WE DIDN'T START THE FIRE!, PETE WENTZ!, 89 - 2023!, BILLY JOEL!, ICELAND VOLCANO!, GMO!, STRANGER THINGS!, TIGER KING!, TRUMP IMPEACHED!, ALIENS!, MARS ROVER!, AVATAR!, LARK VOORHIES!  You can find the videos from this episode at our Discord RIGHT HERE!

Citation Needed
Mars Rovers

Citation Needed

Play Episode Listen Later Jun 21, 2023 39:41


A Mars rover is a motor vehicle designed to travel on the surface of Mars. Rovers have several advantages over stationary landers: they examine more territory, they can be directed to interesting features, they can place themselves in sunny positions to weather winter months, and they can advance the knowledge of how to perform very remote robotic vehicle control. They serve a different purpose than orbital spacecraft like Mars Reconnaissance Orbiter. A more recent development is the Mars helicopter. Our theme song was written and performed by Anna Bosnick. If you'd like to support the show on a per episode basis, you can find our Patreon page here.  Be sure to check our website for more details.

Father Roderick
Recasting Luke Skywalker? Transformers and Barbies; the Lego Mars Rover

Father Roderick

Play Episode Listen Later May 31, 2023 85:09


In this week's episode of my podcast 'The Break': Recasting Luke Skywalker; Trailer reactions: Transformers and the Barbie Movie; Parish renewal; the Lego Mars Rover Perseverance; Size Matters Not review; upgrading old tech.

Here's The Thing with Alec Baldwin
Filmmaker Ryan White Wants You To Eat Your Broccoli

Here's The Thing with Alec Baldwin

Play Episode Listen Later May 30, 2023 36:38


Filmmaker Ryan White has made a dizzying array of unique documentaries, including “The Keepers,” about the unsolved murder of a Catholic nun, “The Case Against 8” about the fight for marriage equality, “Good Night Oppy,” which traces the journey of NASA's Mars Rover and “Assassins,” about the murder of North Korean Supreme Leader Kim Jong-un's half-brother. The Emmy-nominated director's latest project, “Pamela, A Love Story,” is a raw look at the life of 90's bombshell Pamela Anderson. It showcases a more vulnerable side of the actress and re-examines the major life events of the star – from her rise to fame to the infamous, stolen sex tape with her then-husband, Tommy Lee. Alec speaks with Ryan White about what he learned filming with Anderson, the impact the documentary had on her life and how he balances the light and the dark of his projects.See omnystudio.com/listener for privacy information.

Elvis Duran and the Morning Show ON DEMAND
Gandhi's 3 Things: Mars Rover Finds Evidence of Water

Elvis Duran and the Morning Show ON DEMAND

Play Episode Listen Later May 1, 2023 1:49


Texas shooter is on the run, severe weather causes East coast flooding, and China's Mars Rover finds water! See omnystudio.com/listener for privacy information.