THE PARK IS NOW OPEN!! Jurassic World Full Reaction Watch Along: / thereelrejects Visit https://huel.com/rejects to get 15% off your order What A Fun Monster Movie! Jurassic World Reaction, Recap, Commentary, Analysis, & Spoiler Review!! Greg Alba and Tara Erickson react to the massive 2015 blockbuster Jurassic World, the fourth film in the Jurassic Park franchise! Starring Chris Pratt (Guardians of the Galaxy, The Super Mario Bros. Movie) as Owen Grady, Bryce Dallas Howard (Spider-Man 3, The Help) as Claire Dearing, Vincent D'Onofrio (Daredevil, Full Metal Jacket) as Hoskins, Ty Simpkins (Iron Man 3, Insidious) as Gray, and Nick Robinson (Love, Simon; Everything, Everything) as Zach, the film brought the park back to life in a massive way. We go deep into the best Jurassic World moments, including the Indominus Rex escape, the Velociraptor squad hunting sequence, and the epic T-Rex vs Indominus Rex final battle — some of the most viewed scenes on YouTube, like "T-Rex and Blue vs Indominus Rex" (200M+ views), "Mosasaurus Eats Shark" (100M+), and "Indominus Rex Kills Ankylosaurus." Featured dinosaurs include the fan-favorite Velociraptor, Tyrannosaurus Rex (T-Rex), Mosasaurus, Indominus Rex, Triceratops, Ankylosaurus, Apatosaurus, and Pteranodons. We also cover how Jurassic World fits into the full Jurassic Park franchise timeline, following Jurassic Park (1993), The Lost World: Jurassic Park (1997), Jurassic Park III (2001), Jurassic World (2015), Jurassic World: Fallen Kingdom (2018), and Jurassic World Dominion (2022). Does Jurassic World hold up nearly a decade later? How does the film balance legacy nostalgia with new thrills? Let's find out together in this hilarious, geeky, and emotional revisit. Don't forget to like, comment with your favorite dinosaur, and subscribe for more blockbuster movie reactions!

Follow Tara Erickson: YouTube: https://www.youtube.com/@TaraErickson Instagram: https://www.instagram.com/taraerickson/ Twitter: https://twitter.com/thetaraerickson

Intense Suspense by Audionautix is licensed under a Creative Commons Attribution 4.0 license. https://creativecommons.org/licenses/...

Support The Channel By Getting Some REEL REJECTS Apparel! https://www.rejectnationshop.com/

Follow Us On Socials: Instagram: https://www.instagram.com/reelrejects/ TikTok: https://www.tiktok.com/@reelrejects?lang=en Twitter: https://x.com/reelrejects Facebook: https://www.facebook.com/TheReelRejects/

Music Used In Ad: Hat the Jazz by Twin Musicom is licensed under a Creative Commons Attribution 4.0 license. https://creativecommons.org/licenses/by/4.0/ Happy Alley by Kevin MacLeod is licensed under a Creative Commons Attribution 4.0 license. https://creativecommons.org/licenses/...

POWERED BY @GFUEL Visit https://gfuel.ly/3wD5Ygo and use code REJECTNATION for 20% off select tubs!!

Head Editor: https://www.instagram.com/praperhq/?hl=en Co-Editor: Greg Alba Co-Editor: John Humphrey

Music In Video: Airport Lounge - Disco Ultralounge by Kevin MacLeod is licensed under a Creative Commons Attribution 4.0 license. https://creativecommons.org/licenses/by/4.0/

Ask Us A QUESTION On CAMEO: https://www.cameo.com/thereelrejects

Follow TheReelRejects On FACEBOOK, TWITTER, & INSTAGRAM: FB: https://www.facebook.com/TheReelRejects/ INSTAGRAM: https://www.instagram.com/reelrejects/ TWITTER: https://twitter.com/thereelrejects

Follow GREG ON INSTAGRAM & TWITTER: INSTAGRAM: https://www.instagram.com/thegregalba/ TWITTER: https://twitter.com/thegregalba

Learn more about your ad choices. Visit megaphone.fm/adchoices
Let's go girls! As promised, Ella has done a deep dive about VELOCIRAPTORS. Were they fast? Were they feathered? What did they do with their long, long claws? Could they fly? Find out all this and more! We also have a Ready Pet Go from Nikki and Jubilee! Also, please forgive our congested voices, we both happened to be a little sick! Send us YOUR pet stories (Ready, Pet, Go!) at comfortcreatures@maximumfun.org and don't forget to rate, review, and subscribe! Follow us @comfortcreaturespodcast on Instagram! Join us on Discord: https://discord.gg/PFVQXgMYWB
Here's a taster of our new Premium-only story. To hear it in full, please join our Premium Subscription service.

Become a PREMIUM Subscriber
You can now enjoy Animal Tales by becoming a Premium Subscriber. This gets you:
- All episodes in our catalogue advert free
- Bonus Premium-only episodes (every Friday) which will never be used on the main podcast
- We guarantee to use one of your animal suggestions in a story
You can sign up through Apple Podcasts or through Supercast and there are both monthly and yearly plans available. You can find more Animal Tales at https://www.spreaker.com/show/animal-tales-the-kids-story-podcast
DR. ALAN GRANT RETURNS!! Jurassic Park III Full Reaction Watch Along: / thereelrejects Download the PrizePicks today at https://prizepicks.onelink.me/LME0/RE... & use code REJECTS to get $50 instantly when you play $5! With Jurassic World: Rebirth around the corner, Greg & Tara continue their Jurassic Marathon with their Jurassic Park III Reaction, Recap, Commentary, Analysis, & Spoiler Review!! Join Greg Alba & Tara Erickson as they take flight back to Isla Sorna in Joe Johnston's 2001 sci-fi adventure Jurassic Park III. When Dr. Alan Grant (Sam Neill, The Lost World: Jurassic Park, Peaky Blinders) accepts a mysterious aerial survey, he's lured into a rescue mission by divorced couple Paul Kirby (William H. Macy, Fargo, Shameless) and his ex-wife Amanda (Téa Leoni, Deep Impact, Madagascar), only to crash-land on the dino-infested island. Stranded without backup, they must survive against the cunning Spinosaurus, the film's signature apex predator, and evade deadly packs of Velociraptors. Along the way, they're joined by Grant's resourceful young assistant Billy Brennan (Alessandro Nivola, Face/Off, American Hustle) and aided by Dr. Ellie Sattler (Laura Dern, Jurassic Park, Big Little Lies), whose quick thinking ultimately saves the group. Don't miss the nail-biting river raft sequence where the Spinosaurus attacks from below, the iconic Pteranodon aviary ambush, and the lore-expanding finale as Grant and the Kirbys outwit their monstrous pursuer. Greg & Tara break down every heart-pounding set-piece, from the aerial crate drop and raptor pen breakout to the jungle-clearing showdown, analyzing how Jurassic Park III balances nonstop thrills with nods to Spielberg's legacy.

Follow Tara Erickson: YouTube: https://www.youtube.com/@TaraErickson Instagram: https://www.instagram.com/taraerickson/ Twitter: https://twitter.com/thetaraerickson

Intense Suspense by Audionautix is licensed under a Creative Commons Attribution 4.0 license. https://creativecommons.org/licenses/...

Support The Channel By Getting Some REEL REJECTS Apparel! https://www.rejectnationshop.com/

Follow Us On Socials: Instagram: https://www.instagram.com/reelrejects/ TikTok: https://www.tiktok.com/@reelrejects?lang=en Twitter: https://x.com/reelrejects Facebook: https://www.facebook.com/TheReelRejects/

Music Used In Ad: Hat the Jazz by Twin Musicom is licensed under a Creative Commons Attribution 4.0 license. https://creativecommons.org/licenses/by/4.0/ Happy Alley by Kevin MacLeod is licensed under a Creative Commons Attribution 4.0 license. https://creativecommons.org/licenses/...

POWERED BY @GFUEL Visit https://gfuel.ly/3wD5Ygo and use code REJECTNATION for 20% off select tubs!!

Head Editor: https://www.instagram.com/praperhq/?hl=en Co-Editor: Greg Alba Co-Editor: John Humphrey

Music In Video: Airport Lounge - Disco Ultralounge by Kevin MacLeod is licensed under a Creative Commons Attribution 4.0 license. https://creativecommons.org/licenses/by/4.0/

Ask Us A QUESTION On CAMEO: https://www.cameo.com/thereelrejects

Follow TheReelRejects On FACEBOOK, TWITTER, & INSTAGRAM: FB: https://www.facebook.com/TheReelRejects/ INSTAGRAM: https://www.instagram.com/reelrejects/ TWITTER: https://twitter.com/thereelrejects

Follow GREG ON INSTAGRAM & TWITTER: INSTAGRAM: https://www.instagram.com/thegregalba/ TWITTER: https://twitter.com/thegregalba

Learn more about your ad choices. Visit megaphone.fm/adchoices
Brother and sister Velociraptors must retrieve their mother's eggs – and quickly! Written especially for this podcast by Alice. If you enjoyed this story, please do leave us a review. And, if you'd like to suggest an animal for a future Animal Tales story, you can do so by emailing podcast@animaltales.uk. We would love to hear from you.

Animal Tales Books!
Collections of Animal Tales stories are available to buy exclusively at Amazon. Simply search for Animal Tales Short Stories or follow this link: https://www.amazon.co.uk/dp/B0CLJQZ9C9?binding=paperback&ref=dbs_dp_sirpi

Become a PREMIUM Subscriber
You can now enjoy Animal Tales by becoming a Premium Subscriber. This gets you:
- All episodes in our catalogue advert free
- Bonus Premium-only episodes (one per week) which will never be used on the main podcast
- We guarantee to use one of your animal suggestions in a story
You can sign up through Apple Podcasts or through Supercast and there are both monthly and yearly plans available. Discover a brand new story every Monday, Wednesday and Friday – just for you! You can find more Animal Tales at https://www.spreaker.com/show/animal-tales-the-kids-story-podcast

A Note About The Adverts
In order to allow us to make these stories we offer a premium subscription and run adverts. The adverts are not chosen by us, but played automatically depending on the platform you listen through (Apple Podcasts, Spotify, etc) and the country you live in. The adverts may even be different if you listen to the story twice. We have had a handful of instances where an advert has played that is not suitable for a family audience, despite the podcast clearly being labelled for children. If you're concerned about an advert you hear, please contact the platform you are listening to directly. Spotify, in particular, has proven problematic in the past, for both inappropriate adverts and the volume at which the adverts play. If you find this happening, please let Spotify know via their Facebook customer care page. As creators, we want your child's experience to be a pleasurable one. Running adverts is necessary to allow us to operate, but please do consider the premium subscription service as an alternative – it's advert free.
(image source: https://dinopedia.fandom.com/wiki/Dromaeosaurus) Host Matthew Donald and guest co-host Stephen Curro discuss Dromaeosaurus, the namesake of the dromaeosaur family, whose members are more commonly known as "raptors." Which means Velociraptor is more the namesake of the family, but I'm talking scientifically! "Uh, actually, they're not raptors, they're dromaeosaurs." Gee, thanks, Kyle. From the Late Cretaceous, this 7-foot coelurosaurian theropod had a bite roughly three times stronger than Velociraptor's and could theoretically take down even bigger prey solo than its more famous cousin could. Poor Dromaeosaurus, always upstaged by Velociraptor not because it was better, but because it had the movie deal. Your big break will come too someday, Dromey. Want to further support the show? Sign up to our Patreon for exclusive bonus content at Patreon.com/MatthewDonald. Also, you can get links to follow Matthew Donald and purchase his books at https://linktr.ee/matthewdonald. His latest book, Teslamancer, just released August 27th! And mild spoiler alert... there are kind of dinosaurs in it... mwuahahaha. Hosted on Acast. See acast.com/privacy for more information.
Velociraptor driver Devin Winfield returns to discuss his incredible second season and his chances of competing close to home at World Finals XXIV.
This week Barney & Michael talk Eyes, James Bond, Walkie-Talkies, Gravy again, Michael's Nightmare Top 5 & The Deaf Community... Hosted on Acast. See acast.com/privacy for more information.
The tall grass of Isla Sorna whispers in the afternoon breeze, creating a hypnotic pattern of movement across the vast field before you. You've strayed from the tour path, drawn by the beauty of the untamed landscape. A distant memory of Dr. Alan Grant's stern warnings about raptors and tall grass flickers in the back of your mind, but you dismiss it. After all, the island's security systems and containment procedures have been significantly upgraded since the original incidents.

A sound catches your attention – a chirp-like call, almost birdlike in its musicality. You freeze, scanning the grass around you. Nothing. Just the wind and the endless sea of green. The sound comes again, this time from a different direction. And then another from somewhere else entirely. The realization hits you with cold clarity: you're being hunted.

Suddenly, the grass parts twenty feet ahead of you, revealing the sleek, muscular form of a Velociraptor. Standing about six feet tall, its skin is a mottled pattern of browns and greens, with distinctive blue stripes running down its sides. The creature's head tilts as its amber eyes lock onto yours with chilling intelligence. You've seen them before, of course – safely behind glass in the park's exhibits. But here, with nothing between you and those curved, six-inch killing claws, the raptor is a different creature entirely.

Unlock an ad-free podcast experience with Caloroga Shark Media! Get all our shows on any player you love, hassle free! For Apple users, hit the banner on your Apple Podcasts app. For Spotify or other players, visit caloroga.com/plus. No plug-ins needed! Subscribe now for exclusive shows like 'Palace Intrigue,' and get bonus content from Deep Crown (our exclusive Palace Insider!) Or get 'Daily Comedy News,' and '5 Good News Stories' with no commercials! Plans start at $4.99 per month, or save 20% with a yearly plan at $49.99. Join today and help support the show!

We now have Merch! FREE SHIPPING! Check out all the products like T-shirts, mugs, bags, jackets and more with logos and slogans from your favorite shows! Did we mention there's free shipping? Get 10% off with code NewMerch10. Go to Caloroga.com

Get more info from Caloroga Shark Media and if you have any comments, suggestions, or just want to get in touch our email is info@caloroga.com
Wrapping up our ROTTEN EGGS triple feature, special guest Sarah Clingenpeel from Terror Films joins us as we follow an adorable but audacious billionaire to a private island in Costa Rica to meet his caravan of cretaceous carnivores in Steven Spielberg's JURASSIC PARK, starring Sam Neill, Laura Dern, Jeff Goldblum, Richard Attenborough, Wayne Knight, Ariana Richards, Joseph Mazzello, Martin Ferrero, Samuel L. Jackson, and Bob Peck.

Follow Sarah's new film Dryspell on Instagram
Find Sarah on social media - @spookysarahskeletons
Check out Terror Films here
Subscribe on Apple Podcasts, Spotify, and YouTube
For bonus content and commentaries, check out our Patreon
Follow the show on Instagram, TikTok, and Facebook
Want to support the show and save 20% on Fangoria? Visit Fangoria and enter PROMO CODE: HOWIMETYOURMONSTER at checkout!
Looking for How I Met Your Monster merch? Check out TeePublic for shirts, stickers, mugs, and more!
Questions and comments: howimetyourmonsterpodcast@gmail.com
Welcome to the Alfalfa Podcast
Welcome to the Fossil Huntress Podcast. Today on the show we're talking about living dinosaurs—our avian friends, the birds. From the tiniest hummingbird to the towering ostrich, these feathered creatures carry the legacy of the mighty theropods, bridging millions of years of evolution in their lightweight skeletons and high-powered hearts. So join me as we explore the link between the sweet little chirpers you see in your yard and impressive predators like T. Rex and Velociraptor. For more like this, visit Fossil Huntress HQ at www.fossilhuntress.com to connect with the ARCHEA Blog, Facebook and Instagram for more geeky goodness!
In this video, we explore the Utahraptor, the most dangerous raptor of all time! With its impressive size, sharp claws, and pack-hunting behavior, this predator was a true force of nature in the Cretaceous period. Learn about its incredible physical features, hunting strategies, and how it compared to other famous raptors like the Velociraptor. Get ready to uncover why the Utahraptor was one of the fiercest creatures to ever walk the Earth! IF YOU GO ON ONE OF THE TRIPS FROM FOSSIL TRIPS: tell them you heard about them from Prehistoric Life Podcast and they will give you $250 off your tickets. Remember to follow me at Prehistoric_Life_Podcast on Instagram and check out the new website PrehistoricLifePodcast.com and on YouTube @prehistoric life podcast
(image source: https://www.thoughtco.com/things-to-know-protoceratops-1093796) Host Matthew Donald and guest co-host Stephen Curro discuss Protoceratops, a hardy and stocky fellow with a tubby body and a grumpy attitude. I really relate to this creature. From the Late Cretaceous, this 8-foot ceratopsian lived in the desert with the more famous Velociraptor and the two of them really hit it off. They couldn't keep their claws or beaks off each other. I wonder if anyone's captured their interactions on video or… stone-agram? Dear god, what is this show? Want to further support the show? Sign up to our Patreon for exclusive bonus content at Patreon.com/MatthewDonald. Also, you can get links to follow Matthew Donald and purchase his books at https://linktr.ee/matthewdonald. His latest book, Teslamancer, just released August 27th! And mild spoiler alert... there are kind of dinosaurs in it... mwuahahaha. Hosted on Acast. See acast.com/privacy for more information.
The clubhouse gets Cretaceous this week; your nice hosts have been challenged to create a game about dinosaurs and their feeding habits.

Prompt: Make a game about the feeding habits of dinosaurs or of paleontologists, bonus points if all the dinosaurs are from the same era.
Game type: Design document
Player count: 1

Rules:
- Cretaceous period
- Flowering plants evolved here
- Early mammals here too
- Velociraptors too!
- Different sized dinosaurs
- Name: Dinosaurs of North Dakota
- Play as big (T-rex), small (dog-sized) and medium (triceratops) creatures
- Survive a day as each of these creatures

Order of play:
- Medium first (herbivore)
- Big next (T-rex)
- Small last (scavenger)
- Then the paleontologist discovers the remains (and eats chicken with a PLASTIC fork)
- Where you die determines how preserved the bones are during the paleontology phase
- If you want more preserved bones you have to choose to not live as long, due to how the preservation works (a toy code sketch of this tradeoff follows the list)

Gameplay:
- Play as the different creatures; the player chooses when that phase of the game ends
- A phase can also end after a certain time played
- Then you swap to a new creature and play as them, choosing when to end
- When you get back to a creature, time has passed and things are different in the world (partly due to player influence from other creatures)
- Trees the player knocked down might stay knocked down in future scenes, and eating a lot of plants may make the area sparse in the future
- Keep playing as the different creatures until enough time has passed
- Afterwards, play as a paleontologist and rediscover what you've done
- What you've done is recorded with the knowledge the paleontologist has (and not the player), so it won't actually be accurate to the player's playthrough
- Based on what the player's done, it can affect what the paleontologist will be able to say about the state of things
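For fun, here's the design doc's central tradeoff sketched in code. This is purely illustrative: the function name, the constants, and the quick-burial bonus are all invented for the example, not part of the hosts' actual design.

```python
# Toy model of the pitch's preservation mechanic: the longer you choose to
# live as a dinosaur, the worse your bones are preserved for the
# paleontologist phase. All numbers here are made up for illustration.
def preservation_quality(days_survived: int, died_in_sediment: bool) -> float:
    """Return a 0..1 preservation score for the dig phase."""
    quality = max(0.0, 1.0 - 0.05 * days_survived)  # longer life, worse bones
    if died_in_sediment:  # where you die matters: quick burial preserves more
        quality = min(1.0, quality + 0.3)
    return quality

# A player who retires a dinosaur early near a riverbed digs up better bones.
print(preservation_quality(days_survived=3, died_in_sediment=True))   # 1.0
print(preservation_quality(days_survived=18, died_in_sediment=False)) # 0.1
```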
One might think modern man could easily extinguish a prehistoric bird, but that would be a gross underestimation. Emus are basically modern Velociraptors, and the Australian Army learned that the hard way. Jason Kaye Comedy. Free Link To Strider's Stand Up Special Makin' Memories. Sources: britannica.com, constitutioncenter.org, smconservancy.org, pacificpalisadeshistory.org
Whenever I'm asked about my favorite dinosaur, it has always been the Ankylosaurus, the Late Cretaceous dinosaur that was likened to an armoured tank. This plant-eating dinosaur was probably peaceful but, because of its club tail, would have been a formidable foe for predators.
In this asynchronous episode we're interviewing fellow core developer Yury Selivanov to talk about asyncio's past and future, composable design, immutability, and databases you'd actually like using. We also broke the 2-hour episode barrier!

## Timestamps
(00:00:00) INTRO
(00:01:33) PART 1: INTERVIEW
(00:02:27) What drives you?
(00:04:47) How do you choose what to work on?
(00:08:10) Hyperfocus
(00:09:28) Things from Rust that Python could use
(00:14:50) Nothing is sacred when you depend on glibc
(00:18:47) TypeScript typing is god-tier
(00:22:04) Adding async and await to Python
(00:34:11) Adding new keywords to the language
(00:41:17) Jumping into a new codebase
(00:49:22) Any design regrets?
(00:58:46) Contextvars
(01:10:40) Is the frozenmap PEP happening?
(01:19:21) uvloop
(01:23:25) What makes Gel lovable?
(01:39:57) PART 2: PR OF THE WEEK
(01:47:08) Saturday talks at PyCon should be fun
(01:50:35) PART 3: WHAT'S GOING ON IN CPYTHON
(01:50:47) Ken Jin's tail-call interpreter
(01:55:05) Barney Gale's glob.glob() optimization
(01:55:43) Brandt's boolean guards to narrow types to values in the JIT
(01:56:33) Mark Shannon's stack limits implemented with addresses, not counters
(01:58:34) Brandt's removal of _DYNAMIC_EXIT
(01:58:53) Mark Shannon's async for branches instrumented
(01:59:36) Free-threading changes
(01:59:58) Sam Gross' regression tests can now run in --parallel-threads
(02:00:34) Tomasz Pytel's thread safety crusade
(02:01:01) Xuanteng Huang's __annotations__ race fix
(02:01:11) Kumar's per-thread linked lists for tasks
(02:02:54) Serhiy's crashes related to PySys_GetObject() fixed
(02:03:22) Sam's usage of stack pointers in thread stack traversal
(02:03:38) Dino Viehland's lock avoidance during object cleanup
(02:04:23) OUTRO
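Since the episode leans on asyncio and contextvars (the guest authored PEP 567, which added contextvars), here is a minimal runnable sketch of that feature for anyone following along; the variable names are just for illustration.

```python
# Minimal contextvars demo: each asyncio task gets its own copy of the
# context, so setting the variable in one task never leaks into another.
import asyncio
import contextvars

request_id: contextvars.ContextVar[str] = contextvars.ContextVar("request_id")

async def handler(rid: str) -> None:
    request_id.set(rid)     # set only in this task's context copy
    await asyncio.sleep(0)  # yield to the event loop mid-handler
    print(f"task {rid} sees request_id={request_id.get()}")

async def main() -> None:
    # Both tasks run concurrently, but each sees its own value.
    await asyncio.gather(handler("a"), handler("b"))

asyncio.run(main())
```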
Don't dream of talking Velociraptors on episode 151 of The Horror Stans Podcast! With the release of the trailer for the upcoming Jurassic World Rebirth, we thought it would be fun to discuss the underrated (?) Jurassic Park III. Listen as we talk about a film that might be just dumb fun yet knows it, bad 00's haircuts, the misuse of queen Laura Dern and the T-Rex, awesome flying dino scenes, and whether we're excited for the new Jurassic film. **We accidentally say this is from 2000 and not 2001** Please give us a follow and a 5-star rating! Instagram and Twitter: @horrorstans Tiktok: @horrorstanspodcast Steve: @screamsteve/@stesta621 Matt: @mcavo92
In this episode of Wildly Curious, Katy and Laura dive deep into the prehistoric past to separate dinosaur fact from fiction. From Hollywood myths to groundbreaking discoveries, they explore how our understanding of dinosaurs has evolved over time. Were all dinosaurs cold-blooded? Did they all go extinct at the same time? What even is a dinosaur? Prepare to have your dino knowledge challenged as they break down the latest fossil evidence, debunk common misconceptions, and reveal the fascinating science behind these ancient creatures. Want to see behind the scenes and unedited footage?!
Survival horror meets dinosaurs. Regina and her team are sent to a mysterious research facility where prehistoric creatures are on the loose. Capcom's attempt to mix survival horror with dinosaurs. Released in 1999, Dino Crisis introduced players to a sci-fi thriller where Velociraptors and T-Rexes replaced zombies. Hosted on Acast. See acast.com/privacy for more information.
Welcome back T&J fam! This week we have a fun show for everyone. Josh starts out by discussing his ability to impersonate the velociraptor. We then discuss Genesis 34 and how much someone would have to pay you to get circumcised as an adult. Although this leads us down a very irredeemable path, we somehow bring it back to the Charismatic movement. We look at its history, key figures, and criticisms. Enjoy!
Timestamps:
00:00:00 Intro
00:07:00 What have we watched/read/listened to?
00:07:20 Severance
00:12:30 Skeleton Crew
00:17:40 On Call
00:21:08 Missing You
00:23:30 WWE Raw on Netflix
00:37:10 The Monkey Trailer
00:42:30 Hot Toys is releasing an Alien Romulus figurine
00:48:50 Scientifically accurate Velociraptors in Jurassic Park
00:50:40 Helldivers film wants A-list actors, but with a twist
00:52:10 Anaconda reboot in the works
00:56:20 The Witcher: Sirens of the Deep Trailer
00:58:40 LEGO nerd builds the Battle of Geonosis in two years
01:02:20 Are we finally going to see Cal Kestis in live-action Star Wars?
01:06:40 Spider-Man turns up in a Japanese manga
01:09:10 Three names for the new Black Panther
01:15:20 Anthony Mackie shares how he found out about the Captain America reveal
01:18:30 Hugh Jackman will keep playing Wolverine for another 10 years
01:23:30 1923 Trailer
------ Spotify ► https://spoti.fi/2qhR6lr Apple Podcast ► https://apple.co/3GzfqqD Twitter ► https://www.twitter.com/NerdCulturePC Instagram ► https://www.instagram.com/NerdCulture.PC Jelle ► https://www.twitter.com/GKJelle Huey ► https://www.twitter.com/RealHueyBrown Koos ► https://www.twitter.com/jtmooten Outro by Studio Megaane
Mike The Intern talks with Carson Williams, who drives the Velociraptor! See omnystudio.com/listener for privacy information.
Meet Doug Quin, sound designer and naturalist who makes field recordings all over the world. Hear what Doug heard when he got up close to emperor penguins, lions and vultures. (R)

Sound designer and naturalist Doug Quin has been highly attuned to sound since he was a young child growing up in Algeria under the threat of bombing. Through his family's travels and his years at a Scottish boarding school, Doug fell in love with the outdoors, and especially with wintery landscapes. He later transformed his deep curiosity about nature and skills in music and art into a prolific career. Since the early 1980s Doug has been making field recordings in every corner of the Earth, and putting them to use in work spanning all media. His extensive credits include designing sound for films such as Jurassic Park 3 and countless nature documentaries, collaborating with the Kronos Quartet, composing soundscapes for museums and art galleries, releasing albums, and contributing planetary ambiences to the score of the game Spore. This episode of Conversations touches on the natural world, Jurassic Park 3, animals, nature, silence, Antarctica, origin stories, Scotland, Algeria, birding, birdsong, war, bombing, resilience and family.
When you picture a dinosaur, what does it look like? For Jingmai O'Connor, paleobiologist and associate curator of reptiles at the Field Museum of Chicago, the dinosaurs she studies look a lot more like birds.

"If you looked at an artist's reconstruction of something like Velociraptor or Microraptor ... you would see that it pretty much looks the same as a bird," Jingmai says. "In terms of the plumage, the soft tissues covering the body, it would have looked very, very birdlike."

In this episode, Short Wave delves into the dinosaur-avian connection. Which dinosaurs had feathers? Were they using them to fly? And once and for all – what are those ancient dinosaurs' relationship to birds today? Have other dinosaur questions you want us to unravel? Email us at shortwave@npr.org — we'd love to hear from you!

Listen to every episode of Short Wave sponsor-free and support our work at NPR by signing up for Short Wave+ at plus.npr.org/shortwave. Learn more about sponsor message choices: podcastchoices.com/adchoices NPR Privacy Policy
The lifespan of dinosaurs, those fascinating creatures that dominated the Earth for millions of years, varied considerably between species. Contrary to the popular idea that all dinosaurs lived for hundreds of years, their life expectancy was shaped by their size, their lifestyle, and their environment.

Dinosaur lifespan: a matter of size
Small dinosaurs, like Compsognathus or Velociraptor, generally lived shorter lives, comparable to mammals of similar size. They reached sexual maturity quickly to compensate for a higher mortality rate, and their average life expectancy was around 10 to 20 years. By contrast, giant dinosaurs like the sauropods (Apatosaurus, Brachiosaurus) or the large theropods (Tyrannosaurus rex) had much longer life expectancies, sometimes reaching 70 to 100 years. Their great size and slow growth protected them from predators, which increased their longevity.

Factors influencing their longevity
Growth is a key factor in understanding dinosaur lifespans. Paleontologists analyze fossilized bones, in particular their growth rings, which are comparable to tree rings. These rings make it possible to estimate an animal's age and growth rate. For example, the famous T. rex reached adult size in 20 years but could live to around 30. Metabolism also plays a role. Although their exact metabolism is still debated, dinosaurs probably had a physiology intermediate between that of modern reptiles and birds. The giant dinosaurs, with slower metabolisms, lived longer than the smaller, faster-metabolism ones.

Comparison with modern species
Modern dinosaurs, the birds, have highly variable lifespans. Small songbirds generally live a few years, while large birds like parrots can reach 80. This partly mirrors the diversity of the extinct dinosaurs. In short, dinosaur lifespans were extremely diverse, ranging from a few decades for the small carnivores to nearly a century for the giant herbivores. These spans reflect each species' adaptation to its environment, testifying to the incredible diversity of these ancient inhabitants of the Earth. Hosted on Acast. See acast.com/privacy for more information.
In this thrilling episode of Challenge Accepted, Frank and Thomas kick off 2025 with a bang, diving into the cinematic wonder that is Jurassic Park. As part of their month-long celebration of John Williams, they dissect the magic of Spielberg's dinosaur epic, exploring its groundbreaking special effects, unforgettable characters, and the iconic score that continues to inspire. Along the way, they reflect on the film's themes, discuss its cultural impact, and share personal stories of experiencing the movie for the first time. Whether you're a lifelong fan or a newcomer, this episode is a celebration of everything that makes Jurassic Park legendary.

Timestamps:
00:00:00 Introduction and John Williams Month announcement
00:01:22 Reflecting on the community's fundraising efforts for the animal shelter
00:02:01 Excitement for 2025 conventions, including WonderCon
00:05:28 Exploring the iconic T-Rex breakout scene and its practical effects
00:06:56 Nostalgia: First encounters with Jurassic Park as kids
00:12:28 Ethical science and Dr. Malcolm's iconic "Life finds a way" speech
00:25:00 Dinosaurs, birds, and the evolution of paleontological science
00:28:30 The emotional power of John Williams' score and its lasting legacy
00:36:00 Fun facts about the sound design, including the T-Rex roar

Takeaways:
- Jurassic Park blends practical and CGI effects to create timeless visual storytelling.
- John Williams' score seamlessly balances wonder and fear, making it a cornerstone of the film's emotional impact.
- The film's themes of playing God and the ethical dilemmas of science are still relevant today.
- Practical effects, like the animatronic T-Rex, elevate the movie's realism and longevity.
- The movie's cultural impact inspired a generation of paleontologists and remains a benchmark for adventure cinema.

Memorable Quotes:
"Life finds a way." – Dr. Ian Malcolm (Jurassic Park)
"John Williams mixes wonder and fear in his score, leaning one way or the other depending on the scene, but never forgetting the other side." – Frank
"If they opened up Jurassic Park today, my ass would be there so fast." – Thomas
"You didn't stop to think if you should." – Dr. Ian Malcolm, reflecting on ethical dilemmas.

Call to Action:
Love what you hear? Subscribe to Challenge Accepted wherever you get your podcasts and leave us a review! Your support keeps the conversation alive.

Links:
GeekFreaksPodcast.com: Your go-to source for all geek news and updates!
Social Media: Follow us for behind-the-scenes content and more discussions:
Instagram: @challengeacceptedlive
Twitter: @CAPodcastLive
TikTok: @challengeacceptedlive

Apple Podcast Tags: Jurassic Park, John Williams, Spielberg, T-Rex, Jurassic Park review, Challenge Accepted podcast, movie analysis, 90s movies, iconic movie scores, film nostalgia, practical effects, CGI, dinosaurs, ethical science, Ian Malcolm, Dr. Grant, John Hammond, velociraptors, movie soundtracks, Geek culture, Challenge Accepted, podcast episode, timeless movies, WonderCon, movie breakdown, Spielberg movies.
In this episode, Alex starts by discussing how the Trump 2.0 cabinet is like the Velociraptors in Jurassic Park — they've figured out how to open the doors and now they are a danger. The training wheels are off and now they are ready to ride. Alex is also irritated that Trump is rewarding the bad behavior of people like Elise Stefanik with nominations to key positions. For the second part of the episode, Alex talks with his good friend Cole Costello about the election. They discuss what it means for labor movements, whether a new gilded age is coming, and much more!
Velociraptor (and Oviraptor & Saurornithoides) were named exactly 100 years ago to the day! We're celebrating Velociraptor's 100-year anniversary by going through what we now know about this awesome little dinosaur. For links to every news story, all of the details we shared about Velociraptor, and our fun fact check out https://iknowdino.com/Velociraptor-Episode-519/ Join us at www.patreon.com/iknowdino for dinosaur requests, bonus content, ad-free episodes, and more.

Dinosaur of the day Velociraptor, a small predatory dinosaur that had some of the most infamous weaponry of any prehistoric animal.

In dinosaur news this week:
- It's November, which means it's Dinovember!
- On November 7, 1924 (almost exactly 100 years ago), Henry Fairfield Osborn named Velociraptor

This episode is brought to you by Princeton University Press. They have four brand new dinosaur books: The Princeton Field Guide to Predatory Dinosaurs, Birds of the Mesozoic, The Little Book of Dinosaurs, and Uncovering Dinosaur Behavior. On December 4, we'll be discussing Uncovering Dinosaur Behavior in depth as part of a special book club segment. Get your copy now and read along with us! Go to press.princeton.edu and use promo code PUP30 for 30% off. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
When a giant asteroid hit Earth around 66 million years ago, it created a massive disaster that wiped out the dinosaurs. But somehow, many mammals survived, and scientists have some ideas why. Unlike the dinosaurs, early mammals were small and could burrow underground or hide in small spaces, which helped them escape the intense heat and fires. They were also able to eat a variety of foods, like seeds and insects, which made it easier for them to survive when plant life was scarce. Dinosaurs, on the other hand, mostly needed specific plants or prey, which disappeared after the impact. Plus, mammals could regulate their body temperature better, helping them survive in the extreme cold that followed. All these survival skills meant mammals had a better chance to make it through—and eventually thrive—while dinosaurs sadly didn't.

Credit:
Prehistoric Planet / Apple TV+ CC BY-SA 3.0 https://creativecommons.org/licenses/...
Velociraptor meeting 02: By KaiserKaijin3DX, https://prehistoric-planet.fandom.com...
Imperobators leap 03: By KaiserKaijin3DX, https://prehistoric-planet.fandom.com...
Juramaia NT: By Nobu Tamura, https://commons.wikimedia.org/w/index... CC BY-SA 4.0 https://creativecommons.org/licenses/...
Horseshoe crab righting: By Rhododendrites, https://commons.wikimedia.org/wiki/Fi...
Chel1000: By Jon Houseman, https://commons.wikimedia.org/wiki/Fi...
Wild Platypus 4: By Klaus - https://flic.kr/p/iFn9PW, CC BY-SA 2.0 https://creativecommons.org/licenses/..., https://commons.wikimedia.org/w/index...
Stock materials (photos, footages and other): https://www.shutterstock.com
Animation is created by Bright Side. #brightside
----------------------------------------------------------------------------------------
Music from TheSoul Sound: https://thesoul-sound.com/
Listen to Bright Side on:
Spotify - https://open.spotify.com/show/0hUkPxD...
Apple Podcast - https://podcasts.apple.com/podcast/id...
----------------------------------------------------------------------------------------
Our Social Media:
Facebook - / brightside
Instagram - / brightside.official
Tik Tok - https://www.tiktok.com/@brightside.of...
Snapchat - / 1866144599336960
Stock materials (photos, footages and other): https://www.depositphotos.com https://www.shutterstock.com https://www.eastnews.ru
----------------------------------------------------------------------------------------
For more videos and articles visit: http://www.brightside.me Learn more about your ad choices. Visit megaphone.fm/adchoices
Can't get enough Smash Boom Best? Here's a bonus episode! Sign up for Smarty Pass and you'll get a bonus episode like this every month. Usually, these episodes are just for Smarty Pass subscribers, but today we're sharing one with all of our listeners. To get more, sign up for Smarty Pass at smartypass.org. How many times have you asked yourself: which is better, Sanden or a Velociraptor? On the one hand, Velociraptors hunt in packs, but on the other hand, Sanden Totten is funny. Sanden has great hair, but Velociraptors have been featured in several movies. It's impossible to decide. Well, grab your Smarty Pass because, with the help of a fun game, we'll decide which is best: Sanden or a Velociraptor!
Hey friends, today I'm putting my blue hat on and dipping my toes in incident response by way of playing with Velociraptor, a very cool (and free!) tool to find evil in your environment. Perhaps even better than the price tag, Velociraptor runs as a single binary you can deploy to spin up a server and then request endpoints to “phone home” to you by way of GPO scheduled task. The things I talk about in this episode and show in the YouTube stream are all based off of this awesome presentation from Eric Capuano, who also was kind enough to publish a handout to accompany the presentation. And on a personal note, I wanted to share that Velociraptor has got me interested in jumping face first into some tough APT labs provided by XINTRA. More to come on XINTRA's offering, but so far I'm very impressed!
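As a rough companion to the episode, here's a sketch of the single-binary server setup described above, wrapped in Python for consistency with the other examples in this roundup. The binary path is hypothetical, and the subcommands ("config generate", "config client", "frontend") are paraphrased from Velociraptor's documented workflow; double-check the flags against the version you download.

```python
# Sketch: stand up a Velociraptor server from the single binary, then
# produce the client config your endpoints use to phone home (deployed,
# for example, via a GPO scheduled task as in Eric Capuano's handout).
import subprocess

BINARY = "./velociraptor"  # hypothetical path to the downloaded binary

# 1. Generate a server config ("config generate -i" runs the interactive wizard).
with open("server.config.yaml", "w") as f:
    subprocess.run([BINARY, "config", "generate"], stdout=f, check=True)

# 2. Derive the client config that points endpoints at your server.
with open("client.config.yaml", "w") as f:
    subprocess.run(
        [BINARY, "--config", "server.config.yaml", "config", "client"],
        stdout=f, check=True,
    )

# 3. Start the frontend; enrolled endpoints will check in here.
subprocess.run(
    [BINARY, "--config", "server.config.yaml", "frontend", "-v"],
    check=True,
)
```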
It is one of the most iconic members of the ceratopsian family. The long nose horn and frill spikes make this dinosaur a remarkable species.
Today on Dr. Hart and Dr. Zart: a dinosaur that gets in the way during lovemaking.
hello team we are back. casual lil ep in the studio. covered a lot in this one. dillon went full on dinosaur with quite possibly a record for funniest noise made by a human. zach laughed. huskers kicked butt. we will return
Some headbutting animals suffer brain damage from the shock; plus, Mississippi has a new most complete dinosaur; histology can help tell a dinosaur fossil from other animals; and more. For links to every news story, all of the details we shared about Saurornithoides, and our fun fact check out https://iknowdino.com/Saurornithoides-Episode-505/ Join us at www.patreon.com/iknowdino for dinosaur requests, bonus content, ad-free episodes, and more.

Dinosaur of the day Saurornithoides, a troodontid from Mongolia named in 1924 by Osborn in the same paper as Velociraptor.

In dinosaur news this week:
- Paleontologists reviewed what it means to have a dome-head and to headbutt like a pachycephalosaurid (and other prehistoric animals)
- Mississippi has a new most complete dinosaur, but the species is still a mystery
- Histology can tell us if a fossil belonged to a dinosaur or another type of animal

This episode is brought to you by Brilliant.org. They have courses that can help you better understand the latest developments in paleontology, from the chemistry which underlies the fossilization process to the data science that is used to model dinosaur populations. Start your 30-day free trial today! Plus I Know Dino subscribers can get an extra 20% off a premium annual subscription at Brilliant.org/iknowdinoNL/

You can win a large Spinosaurus tooth, fossilized leaf, and more by winning our Di-Know-It-All Challenge! Each week from episode 502 to 509 we'll read a puzzle on the show which you can enter to win by answering questions. This week you can enter at bit.ly/dinochallenge505 and if you're a patron you can answer the patron question at patreon.com/posts/108019451. All the rules for the challenge are at bit.ly/dinochallenge24. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Our guest this week is Dr. David Hone, a palaeontologist (yes, like Ross from Friends), and he is here to blow our minds with amazing facts about dinosaurs. Could you have a pet Velociraptor? How did they become extinct? How do we know they actually existed in the first place? Plus, there is a very bizarre and hilarious comparison between a dinosaur's penis and a duck's... no, we're not joking. To buy a copy of David's book, click here: https://www.waterstones.com/book/the-future-of-dinosaurs/david-hone/9781473692282 Come and see us live at The Clapham Grand! Tickets only £15: https://claphamgrand.com/whats-on/?listing-type=joe-marler To get in touch with us, email joe@crowdnetwork.co.uk If you would like to be a guest on the show, click here: https://docs.google.com/forms/d/1rfSo3PVJgtBRZHCCAZndem-iyy2EdvGcEYDqycsM2aQ/viewform To get ad-free and longer episodes on Apple, hit the 'grow the show' button or click: https://apple.co/3sAX0xR On Spotify you can subscribe for £1 a week by clicking this link: https://anchor.fm/thingspeopledo To become an official sponsor, go to Patreon.com/thingspeopledo To grow the show on socials, look for @thingspeoplepod on Instagram, Twitter and TikTok If you'd like to enquire about commercial partnerships with our podcast, email Ryan Bailey ryanb@crowdnetwork.co.uk Music courtesy of BMG Production Music Learn more about your ad choices. Visit podcastchoices.com/adchoices
Every week we'll 3D print designs from the community and showcase slicer settings, use cases and of course, Time-lapses! This Week: Articulated Baby Velociraptor By kzaa https://www.thingiverse.com/thing:6711401

Printer: CR10S SmartPro
Filament: Purple/Green PLA
Print time: 2hr 01min
Dimensions: X:248 Y:180 Z:54mm
.2mm layer / .4mm Nozzle
6% Infill / 1mm Retraction
200C hotend / 60C bed
13g filament used
60mm/s print speed

----------------------------------------- LIVE CHAT IS HERE! http://adafru.it/discord Adafruit on Instagram: https://www.instagram.com/adafruit Shop for parts to build your own DIY projects http://adafru.it/3dprinting 3D Printing Projects Playlist: https://www.youtube.com/playlist?list=PLjF7R1fz_OOWD2dJNRIN46uhMCWvNOlbG 3D Hangout Show Playlist: https://www.youtube.com/playlist?list=PLjF7R1fz_OOVgpmWevin2slopw_A3-A8Y Layer by Layer CAD Tutorials Playlist: https://www.youtube.com/playlist?list=PLjF7R1fz_OOVsMp6nKnpjsXSQ45nxfORb Timelapse Tuesday Playlist: https://www.youtube.com/playlist?list=PLjF7R1fz_OOVagy3CktXsAAs4b153xpp_ Connect with Noe and Pedro on Social Media: Noe's Twitter / Instagram: @ecken Pedro's Twitter / Instagram: @videopixil ----------------------------------------- Visit the Adafruit shop online - http://www.adafruit.com/?utm_source=youtube&utm_medium=videodescrip&utm_campaign=3dprinting Subscribe to Adafruit on YouTube: http://adafru.it/subscribe Adafruit Monthly Deals & FREE Specials https://www.adafruit.com/free?utm_source=youtube&utm_medium=videodescrip&utm_campaign=3dprinting Join our weekly Show & Tell on G+ Hangouts On Air: http://adafru.it/showtell Watch our latest project videos: http://adafru.it/latest?utm_source=youtube&utm_medium=videodescrip&utm_campaign=3dprinting 3DThursday Posts: https://blog.adafruit.com/category/3d-printing?utm_source=youtube&utm_medium=videodescrip&utm_campaign=3dprinting New tutorials on the Adafruit Learning System: http://learn.adafruit.com/?utm_source=youtube&utm_medium=videodescrip&utm_campaign=3dprinting Music by Bartlebeats https://soundcloud.com/adafruit -----------------------------------------
It's an afternoon at AutoVos with Brown Caleb, White Mark and Hostus Maximus Justin Fort. There's a good chat on recoloring your coach with the new SVG dry PPF (SVGIP.com), tinting technique (using XPel), and a secret tint job for a Porsche that's heading for Kansas, and a visit from the Boys in Brown (trucks) carrying the new dealer kit from SVG for Caleb. As per the usual, the gearhead goons get off course and have fun with the goodness: snorkels and rooftop tents (boo!), gun purchase numbers for Q1 in 2024 (yay!), and Nurburgring versus LeMans versus the Pike's Peak Hillclimb (soon). It's in there: a 996 Carrera 2 and a 997 Carrera 4S Targa, Porsche hand placement, Dos Santos for lunch, Putin's aversion to alcohol, Albright's aversion to Putin and JustMark's aversion to Albright on SMG753.
PARENTS: you can now subscribe to us on YouTube, where our live action puppet show featuring Froggy the Gator is now posted! Just click here. Froggy, Mr. Hummus and Baby the Gator are playing robot dinosaurs. But when they get teleported to the Robot Dinosaur Museum and see a familiar foe, their mission becomes that much more important.
In this Fanbase Feature, The Fanbase Weekly co-host Bryant Dillon and special guests Dave Baxter (co-host – The Wine And... Podcast) and Travis Rivas (founder - Super-Abled Comics, writer - Accidental Aliens, The Unstoppable Cherub) participate in a thorough discussion regarding Jurassic Park: Raptors Attack and Jurassic Park: Raptors Hijack (1994) in light of these comic book mini-series' 30th anniversary, with topics including where the books succeed and flounder, what the Velociraptors represent in regards to the humans who attempt to possess them, what the Jurassic World franchise could learn from these comic books, and more. (Beware: SPOILERS for Jurassic Park: Raptors Attack and Jurassic Park: Raptors Hijack abound in this panel discussion!)
It's return guest season here at Latent Space! We last talked to Kanjun in October and Jonathan in May (and December post Databricks acquisition): Imbue and Databricks are back for a rare treat: a double-header interview talking about DBRX from Databricks and Imbue 70B, a new internal LLM that "outperforms GPT-4o" zero-shot on a range of reasoning and coding-related benchmarks and datasets, while using 7x less data than Llama 3 70B.

While Imbue, being an agents company rather than a model provider, are not releasing their models today, they are releasing almost everything else:
* Cleaned-up and extended versions of 11 of the most popular NLP reasoning benchmarks
* An entirely new code-focused reasoning benchmark
* A fine-tuned 70B model, built with Meta Llama 3, to identify ambiguity
* A new dataset of 450,000 human judgments about ambiguity
* Infrastructure scripts for bringing a cluster from bare metal to robust, high performance training
* Our cost-aware hyperparameter optimizer, CARBS, which automatically and systematically fine-tunes all hyperparameters to derive optimum performance for models of any size

As well as EXTREMELY detailed posts on the infrastructure needs, hyperparameter search, and clean versions of the sorry state of industry standard benchmarks. This means for the FIRST TIME (perhaps since Meta's OPT-175B in 2022?) you have this level of educational detail into the hardware and ML nitty gritty of training extremely large LLMs, and if you are in fact training LLMs of this scale you now have evals, optimizers, scripts, and human data/benchmarks you can use to move the industry forward together with Imbue.

We are busy running the sold-out AI Engineer World's Fair today, and so are unable to do our usual quality writeup, however, please enjoy our show notes and the excellent conversation! Thanks also to Kanjun, Ashley, Tom and the rest of team Imbue for setting up this interview behind the scenes.

Video pod

Timestamps
* [00:00:00] Introduction and catch up with guests
* [00:01:55] Databricks' text to image model release
* [00:03:46] Details about the DBRX model
* [00:05:26] Imbue's infrastructure, evaluation, and hyperparameter optimizer releases
* [00:09:18] Challenges of training foundation models and getting infrastructure to work
* [00:12:03] Details of Imbue's cluster setup
* [00:18:53] Process of bringing machines online and common failures
* [00:22:52] Health checks and monitoring for the cluster
* [00:25:06] Typical timelines and team composition for setting up a cluster
* [00:27:24] Monitoring GPU utilization and performance
* [00:29:39] Open source tools and libraries used
* [00:32:33] Reproducibility and portability of cluster setup
* [00:35:57] Infrastructure changes needed for different model architectures
* [00:40:49] Imbue's focus on text-only models for coding and reasoning
* [00:42:26] CARBS hyperparameter tuner and cost-aware optimization
* [00:51:01] Emergence and CARBS
* [00:53:18] Evaluation datasets and reproducing them with high quality
* [00:58:40] Challenges of evaluating on more realistic tasks
* [01:06:01] Abstract reasoning benchmarks like ARC
* [01:10:13] Long context evaluation and needle-in-a-haystack tasks
* [01:13:50] Function calling and tool use evaluation
* [01:19:19] Imbue's future plans for coding and reasoning applications
* [01:20:14] Databricks' future plans for useful applications and upcoming blog posts

Transcript

SWYX [00:00:00]: Welcome to the Latent Space Podcast, another super special edition. Today, we have sort of like a two-header.
Jonathan Frankle from Mosaic Databricks, or Databricks Mosaic, and Josh Albrecht from Imbue. Welcome.

JOSH [00:00:12]: Hey, glad to be here.

SWYX [00:00:14]: Thank you for having us. Hey, so both of you are kind of past guests. Jonathan, you were actually one of the most popular episodes from last year talking about MPT-7B. Remember the days when we trained large models and there was 7B?

JONATHAN [00:00:30]: Yeah, back when reproducing LLaMA 1 7B was considered a huge accomplishment for the field. Those are the good old days. I miss that.

SWYX [00:00:38]: As the things have accelerated a lot. Actually, let's do a quick catch up and Josh, you can chime on in as well. So Databricks got acquired. I talked to you at New York.

JONATHAN [00:00:45]: Mosaic got acquired, although sometimes it feels like Mosaic acquired Databricks because, you know, we're having a lot of fun being here. But, you know, yeah.

SWYX [00:00:52]: Yeah. I mean, you are chief scientist now of Databricks.

JONATHAN [00:00:55]: Chief AI scientist. Careful with the title. As much as I would love to understand how Spark works, I'm going to have to defer that to much smarter people than me.

SWYX [00:01:03]: Got it. And I don't know about like what you would highlight so far as a post-acquisition, but the most recent news is that you guys released DBRX. Is that the thing that most people should be aware of?

JONATHAN [00:01:13]: Actually, that's no longer the most recent news. Honestly, the most recent news, we announced this, but it was at our Data and AI Summit last week. So it was announced among like 100,000 other things, is that we finally released our text to image model, which has been a year in the making through a collaboration directly with Shutterstock. There was a lot of work put into finding a dataset that we were comfortable with working on and trying to build a model that honestly, I felt like I could trust and that others might be able to trust to put out in the world. So that model was released last week. It's unfortunately just available via API due to the fact that the data is quite sensitive and quite valuable. It's Shutterstock's entire business in a lot of ways, but I'm still really excited that there's now a model that is trained on a dataset where the provenance of every single image is known, and it's a damn good model. So I'm really proud of the team on that.

SWYX [00:01:55]: Yeah, amazing. Josh, do you have any thoughts on image model questions?

JOSH [00:01:59]: That is not my area of expertise, but I was excited to see the release of it last week as well, and very happy that you guys did a nice job on the data side of everything there. So that was cool to see.

SWYX [00:02:09]: I think what's unusual is like, I think Shutterstock's doing multiple deals in multiple labs. So what is the Shutterstock model? Like, I guess, is this the house model for Shutterstock? Is this Databricks' version of the Shutterstock model? Like, what is this?

JONATHAN [00:02:22]: The way that I would think about it is that Shutterstock is doing an amazing business in AI across the board. Their dataset is kind of widely known to be the best stock photos dataset in the world, the most comprehensive, the biggest. When you think about like, what dataset am I going to train a multimodal model on? You call Shutterstock. And I, at least I've heard in the news, like OpenAI, Google, Meta, Apple have all called Shutterstock and made those deals. So a lot of models have had Shutterstock data incorporated into them.
But this is the only model I know of so far where it was, you know, exclusively and specifically trained just on the vanilla Shutterstock data. There was nothing else mixed in. We didn't go and scrape the web and find other data or combined datasets or anything like that. And so this is, in some sense, the house blend. But the other piece is that it's just a dataset where the provenance of every image is known in public. Where did the data come from? It is the Shutterstock collection. That's it. You know, nothing less, nothing more. And certainly being at Databricks, if I've learned one thing, I've learned about enterprise customers and what they want out of AI. And one of the things they ask for most is just, what can you tell me about the data the model was trained on? And here, especially for text to image models, where images are just tricky subject matter, there's been a lot of kind of legal conversation about images, especially. It's nice to just have something where I can point to it and say, you know, if you want to know where the images came from, these are what they are and this is how they got there.

SWYX [00:03:36]: I will talk a little bit about Databricks because it's relevant to the rest of today's episode. So Databricks, sorry, I keep misspeaking. It's DBRX.

JONATHAN [00:03:46]: DBRX, actually, there's been a pronunciation update. It is now D-B-Rex. So we have decided to add a dinosaur mascot because what model doesn't like a mascot? So literally, I wish I could pull it up. There is a little plush dinosaur that we had made. It's like the world's cutest dinosaur, but it is the official mascot of D-B-Rex. And there's a little dinosaur logo that, you know, you'll probably see around a little bit more because DBRX is a mouthful, but D-B-Rex, like, you know, it's just kind of...

SWYX [00:04:13]: Rolls off the tongue. I love mascots. Like every company should have a mascot. And I think Hugging Face got it right. You need an emoji mascot because that's the minimal viable image.

JONATHAN [00:04:21]: I probably shouldn't talk at all about, you know, Velociraptor, but, you know, that's a, maybe that's something we can talk about later in the summer. I'll just leave it at that.

SWYX [00:04:28]: Okay. That's a hint to names. I feel like your names leak a lot of alpha. So just to quickly cover the headline details: DBRX, a Mixture-of-Experts model, that's fairly big, 132 billion total parameters, so 36 billion active on any input, pre-trained on 12 trillion tokens of text and code, and did really well on evals to the point where you had to dye your hair blue. That's my high level conclusion.

JONATHAN [00:04:53]: Never make a bet with your team two weeks out from model launch, even when, you know, human eval is looking quite bad. Because if you set some bar, even if it's arbitrary and you think there's no way in hell they're going to hit it, apparently money doesn't motivate people anymore. Humiliating their boss motivates people. So Josh, you should really take a hint from this. You know, you cannot pay someone enough money to make up for you dyeing your hair blue.

JOSH [00:05:15]: I'll keep that in mind for our next model.

SWYX [00:05:17]: It works. So speaking of Imbue's next model, perhaps Josh, you want to actually just say hi to the general sort of latent space audience and talk about what we're releasing today. Yeah.

JOSH [00:05:26]: I'm Josh, CTO of Imbue, and we're not releasing the model.
We're not releasing the weights, but we are releasing a bunch of different things that should make it easier for other people to make their own models. So I think right now, training foundation models from scratch is like a very difficult, time-consuming, expensive, kind of risky endeavor, especially for smaller companies. And the things that we're releasing hopefully make that at least a little bit easier. So the things that we're releasing fall into kind of three different buckets. One is infrastructure and scripts for dealing with the kind of hardware and hardware failures and understanding how well the actual lowest level of things is working, so that you can actually do your training at all and at a reasonable speed without having to constantly restart, etc. So infrastructure and training scripts. A second set of things is around the evaluation. So after you've trained it, like how well is this actually working and how do you know how well it's working? We're releasing a whole bunch of different data there, a new benchmark about code, reasoning, understanding, as well as our own private versions of 11 different open source benchmarks. So things like BoolQ or ANLI, where we've gone through and kind of cleaned up the data as much as possible by looking at all the ones that models get wrong or that are flagged for ambiguity, and also our own kind of private reproductions of those where we've done like a kind of clean room black box, like, okay, this is what the data set is supposed to be. Here are some examples. Let's make our own version of this to make sure that there is no data contamination, etc. To make sure that we're actually, you know, not testing on train. And then I think a final thing that we're releasing there is around 450,000 human judgments about ambiguity and question quality, which we used in the process of cleaning these evaluations and we also hope will be helpful for other people training kind of similar models. And then the third thing is CARBS, our cost-aware hyperparameter optimizer, which was especially helpful for being able to experiment at much smaller scales and then scale those experiments up to the much larger scale kind of on the first try without having to retry it. You don't want to be training, you know, 10, 20 different 70B models. You really want to get these larger modelsSWYX [00:07:30]: right on the first try.JOSH [00:07:30]: And so the ability to kind of tune things very precisely and learn scaling laws, not just for, you know, the like data and flops, but also for learning rate and all the other hyperparameters and see like how you should scale these things up was extremely valuable to us as we were training the larger models. Yeah, that's a lot of stuff.SWYX [00:07:49]: Yeah, exactly. So there's a bunch of stuffJOSH [00:07:50]: we'll have to go through all of it.
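To make the "not testing on train" point concrete, here is a toy sketch of the kind of n-gram overlap check commonly used to flag contamination between training text and eval items. This is not Imbue's released tooling; the 13-gram default is just a conventional heuristic, and the whole thing only illustrates the idea.

```python
from typing import Iterable, List, Set

def ngrams(text: str, n: int = 13) -> Set[str]:
    # 13-gram overlap is a common heuristic for contamination checks;
    # the exact n is a free parameter, not anything Imbue prescribes.
    tokens = text.lower().split()
    return {" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def contaminated(train_docs: Iterable[str], eval_docs: Iterable[str], n: int = 13) -> List[int]:
    # Map each eval n-gram back to the eval item it came from.
    eval_grams = {}
    for i, doc in enumerate(eval_docs):
        for g in ngrams(doc, n):
            eval_grams.setdefault(g, i)
    hits = set()
    for doc in train_docs:
        for g in ngrams(doc, n):
            if g in eval_grams:
                hits.add(eval_grams[g])  # this eval item leaked into training text
    return sorted(hits)

# Toy usage: eval item 0 shares a verbatim 8-gram with the training text.
train = ["the quick brown fox jumps over the lazy dog " * 3]
evals = ["the quick brown fox jumps over the lazy dog and naps", "unrelated text entirely"]
print(contaminated(train, evals, n=8))  # -> [0]
```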
A basic MPI hello world that has the GPUs talk to each other is hard enough, let alone actually training a model, let alone getting good performance out of the GPUs, let alone actually getting a model that converges to anything interesting. There's so many levels of things you have to accomplish. This is the kind of stuff that matters. I think to a point that Josh made earlier, before we got on here, there are plenty of weights out there. Nobody's released this.JOSH [00:08:46]: Yeah, that was part of the motivation actually is that there are lots of other things that are complementary, but I have not seen nearly as much discussion about some of these other things that we think are pretty important. I mean, in some sense,SWYX [00:08:56]: I'm very excited to have Jonathan on because this is a little bit your bread and butter with Mosaic. And I think you've released some of this with Composer. And I think it's just really interesting to see like a different take, basically a full stack take that's kind of open source today.JONATHAN [00:09:18]: Yeah, it's really kind of, it's been an ordeal to figure this out. And every time something changes, whether it's a new GPU or even a new driver update, you get new creative errors and new things go wrong. And, you know, we've dealt with the weirdest things from, you know, our InfiniBand cables getting stolen from the data center twice, like in boxes before they arrived at the data center. Like, you know, a porch pirate basically had stolen our InfiniBand cables back when those were hard to come by. To like, you know, weird recalls of switches to like the strangest stuff has happened. I have my favorite GPU failures I've seen, like ones where the GPU doesn't fail, it has a correctable memory issue and the memory correction causes the GPU to become a straggler and hold up the whole job. Like weird stuff happens and figuring out how to not just identify all of that, but then eventually productize it, is in some sense, the entire story of Mosaic and now Databricks in terms of our ML offering. Really, the thing we offer is we have gone through this suffering and figured out how to even productize that. It has been a pain in the butt.SWYX [00:10:20]: Yeah, it's a lot of work.JOSH [00:10:20]: I think my favorite failure was GPUs just giving wrong math. Like if they give errors, great, because you can see the errors, but if they just give you the wrong math back, not so fun.SWYX [00:10:30]: When did they give you wrong math?JOSH [00:10:32]: Like literally you could just, you know, add two things. For example, the numbers come back. They're not the numbers that they're supposed to be.JONATHAN [00:10:40]: I think it's important to say at this stage, I think it goes without saying for Josh and me, but it's worth saying here, this isn't to say that anybody did anything wrong. It's not like NVIDIA did a bad job or, you know, Mellanox did a bad job or the server builder, the data center operator, the cloud provider, like the million other parties that are involved in building this. We are running these insane chips that are huge and complicated and built on tiny transistors at insane frequencies with insane heat in data centers that for the most part, were not built remotely for this kind of power or heat and have been retrofitted for this. Like failures happen on a good day with normal CPUs. And this is not a good day and not a normal CPU for the most part. It's fun to joke about all the weird things we see.
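A toy version of the "wrong math" check Josh describes might look like the following: run the same matmul on CPU and GPU and compare the results. This is a sketch, not anyone's production health check; the matrix size, trial count, and tolerances are arbitrary assumptions.

```python
import torch

def gpu_math_smoke_test(device: str = "cuda", size: int = 4096,
                        trials: int = 10) -> None:
    """Compare GPU matmuls against a CPU reference to catch silently wrong math.

    A healthy GPU should agree with the CPU up to float32 rounding; a large
    discrepancy with no raised error is exactly the silent failure mode
    described above.
    """
    # TF32 would legitimately loosen fp32 matmul precision on H100/A100,
    # so disable it for an apples-to-apples comparison.
    torch.backends.cuda.matmul.allow_tf32 = False
    for t in range(trials):
        a = torch.randn(size, size)
        b = torch.randn(size, size)
        ref = a @ b                                # CPU reference result
        out = (a.to(device) @ b.to(device)).cpu()  # same computation on the GPU
        assert torch.allclose(out, ref, rtol=1e-3, atol=1e-3), \
            f"trial {t}: GPU result diverges from CPU reference"
    print(f"{trials} matmul trials OK on {device}")

if __name__ == "__main__":
    if torch.cuda.is_available():
        gpu_math_smoke_test()
```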
This is not to say anybody's done anything wrong. This is just kind of part and parcel of working on a massive cluster running at multiple megawatts of power at a time.SWYX [00:11:32]: It's crazy. Yeah.JONATHAN [00:11:33]: So optical cables, like all sorts, like everything.SWYX [00:11:37]: I'll take the opportunity to start going to the sort of infra piece. There's just like a description of the infra just to give people a sense of what we talk about when we talk about massive clusters. So I'm just going to read off the blog post here. This post is about one cluster that has 4,088 H100 GPUs spread across 511 computers. They use unified fabric manager nodes, which manage the InfiniBand network. And you talk a little bit about your networking. Is there anything unusual about this setup that you'll call out to people?JOSH [00:12:03]: Yeah, actually this particular cluster is a little bit non-standard. The normal, like, vanilla setup for these large clusters, as vanilla as it can be, is normally like a 127-node cluster. So closer to like 1,024 GPUs instead of 4,000. Here we have a larger cluster. As you start to get into the larger clusters, the networking becomes a little bit more custom. It's a little bit more, it's a little bit trickier. It's a little bit more difficult to get these things to all be able to talk to each other at the same speed. And so in this particular case, this is a three-tier network architecture instead of two tiers, kind of the normal one. So most of the clusters are a little bit smaller. As you get to even larger scales, then this becomes even much more complicated,SWYX [00:12:43]: much more expensive.JOSH [00:12:43]: So we chose this particular scale, kind of knowing our own workloads and kind of what we wanted to do. This was kind of the right size for us. But yeah, I think it's not exactly vanilla already. It's already getting into kind of the custom territory.SWYX [00:12:54]: So my understanding is that there, and is there any part of this that comes with the Voltage Park deal that you guys had? Is that part of the hardware that you got from the deal with them?JOSH [00:13:04]: Yeah, so we worked really closely with Voltage Park to set up all their clusters and infrastructure and everything and kind of decide even like what to order, how should the networking work? Like we were very involved in kind of the construction and bring up of this. And that's what this post is about, is about that process of like bringing up all these, there's like different clusters in different places of different scales. So in this particular post, we're talking about this one 4,096-GPU cluster, but there are other clusters that they have as well. And we were very closely involved with figuring out the exact architecture and kind of the trade-offs that go along with picking, you know, those exact components. You really don't want to like place the wrong order because it takes months to get it and it's very expensive. So yeah, we were happy to help out with that.JONATHAN [00:13:43]: And then your InfiniBand cables get stolen.SWYX [00:13:44]: Yeah, yeah, exactly.JOSH [00:13:47]: We wanted to make sure that we ended up with compute that would work for us and that would also work for their other customers. And so we kind of helped design something so that we would get exactly what we were looking for.
We knew that these kinds of details would be super important and that getting down to the level of the hardware and like having these good scripts and everything was going to be a core part of like actually getting this to work. I'm very glad that we did that. I don't think that most companies kind of take that full stack approach, but for us, it certainly paid off.SWYX [00:14:12]: Yeah, it's basically sort of built to spec. It's interesting that relationship because you usually, for the rest of us who don't operate at your scale, we take whatever we can get from cloud providers, but you are basically co-designing from the single machine up. And you described that a little bit. Do you want to take us through the process that you described here?JOSH [00:14:27]: Yeah, so for the actual, like the blog post and kind of bringing these machines online.SWYX [00:14:32]: Yeah.JOSH [00:14:32]: So yeah, I think the process, as we have it broken down in the blog post, there's kind of a few different layers. First is like getting the individual machines to work at all and then getting the machines to actually be able to talk to each other. So getting the InfiniBand networking to work and then getting to a point where, you know, not just the machines are working and they can talk to each other, but everything is actually working correctly. There's a big gap between like it's working at all to it's working perfectly correctly. And then after you have all this stuff working perfectly correctly, nice and healthy, then now you get into kind of the software data, like training issues. And then after that, you're still not done. Like now, even once you're training at full speed, things are going to fail over time. Things are going to change. There's going to be new, you know, firmware updates. Like how do you kind of deal with this change and flux over time without going crazySWYX [00:15:16]: and pulling your hair out,JOSH [00:15:16]: trying to like reproduce things or understand why there were regressions. And so there's a lot of work to kind of automate the infrastructure tooling as well. And kind of the first step, like bringing these things online in the first place, you know, you have hundreds of machines at this point. So you don't necessarily want to be like walking around with like a CD-ROM or a USB drive, like plugging it in with your keyboard, like hitting next, next, next on the OS install. That's not how this works. You do that for one machine. And then you use, we use this thing called Metal as a Service to bring up all the other machines. So it's a kind of server that can kind of install the operating system on these other machines. So most like when you're talking about these machines, like each machine is, you know, on the order of hundreds of thousands of dollars. So they usually come with a kind of out-of-band management interface as well. So they don't, they have their InfiniBand networking. They have their normal 100 gigabit per second Ethernet networking. These are like dual, redundant, et cetera. And then you also have this extra out-of-band management network. So you can log in and you can see like the boot screen or you can see the blue screen of death. You can like get in there and actually see what was wrong, which is pretty fun. And it makes it like possible to automate a lot of this work. So the beginning of that, and the blog post goes into much more detail about like exactly how we set these up and kind of the other errors that we ran into. 
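Josh's mention of the out-of-band management network is the kind of thing that makes "automating a lot of this work" possible, so here is a minimal sketch of what driving it can look like. This is not Imbue's released tooling; the hostnames and credentials are hypothetical, and it assumes the standard ipmitool CLI is installed and the BMCs speak IPMI over LAN.

```python
import subprocess

# Hypothetical inventory; in practice this would come from a machine database.
BMC_HOSTS = ["bmc-node001.example.internal", "bmc-node002.example.internal"]

def power_status(bmc_host: str, user: str = "admin", password: str = "secret") -> str:
    # ipmitool talks to the out-of-band BMC, so this works even when the
    # host OS is wedged, mid-install, or the machine is powered off.
    result = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", bmc_host, "-U", user, "-P", password,
         "chassis", "power", "status"],
        capture_output=True, text=True, timeout=30,
    )
    return result.stdout.strip() or result.stderr.strip()

if __name__ == "__main__":
    for host in BMC_HOSTS:
        print(host, "->", power_status(host))
```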
When you're bringing these online, you'll definitely have failures. Even if they all worked in the factory, they get shipped, some parts come loose, something fails, something goes wrong. So when you're bringing them online, there'll be some that don't quite work for all sorts of reasons. As you start to be working with machines at this scale, like if something happens one in a thousand times, you're like pretty likely to see it. And so you can get pretty rare, weird things, especially since we had fairly early builds and fairly early versions of this hardware. Like these are some of the like first machines that were ever produced, some of the first GPUs. So you've got some extra special things there. We definitely worked with Dell, for example, on making fixes in the firmware level to be like, okay, like this thing is wrong. Like we need to update this at the firmware to like actually fix this particular thing. So we worked pretty closely with Dell and NVIDIA. Yeah, that's what I'm saying. Like this stuff gets complicated. And the thing is like, you know, taking a step back, the whole reason we're doing this, right, is that we knew that this was going to be complicated. There would be these kinds of failures. And if we're just using, you know, AWS or some other cloud provider, these errors are still gonna be there and you're gonna have no way to know and no way to debug this and no way to diagnose what's going wrong. And so we would much rather be able to like call up Dell and say, hey, this isn't working. And they're like, yep, okay, cool. Let's debug it together. Oh, I see. Yeah, cool. We'll ship a firmware update and actually fix this for you. That was a much better experience than like, great, it just magically fails. I guess we restart and hope that that machine goes away. Like that's not a very good place to be. So yeah, that's kind of the first place is getting to a place where like GPU training is working on your single node machines. You can observe stuff. We have tons of tooling around like, you know, Prometheus and all sorts of other tools for understanding what's going on in these machines, because you don't want to be like logging into each one and looking at the temperature or something; you really need to have tooling to collect all these metrics, et cetera. Unfortunately, all of the scripts that we have for this entire cluster and for all this infrastructure are a little bit special purpose for our particular thing. So it's not that you can just take every script we have and plug it in. Even if we did open source all the tooling that we have, you'd still have to do a lot of work to adapt it to your own setup. What we are releasing is as many of the things as we can that are going to be useful for other people. You're still going to have to have some way of kind of managing these things, making your own like logging aggregators, et cetera, et cetera. So that's kind of bringing them up to the like, you know, the single nodes that are working. From there, it goes into, I'm happy to keep going if you want. Well, I just want to leave the opportunity for JohnSWYX [00:18:53]: to comment if there's anything that's different from how he runs things.JONATHAN [00:18:57]: Oh, I mean, all I'll say is I'll endorse this and say this s**t is hard. Like this is really, really hard. And, you know, special props to, you know, the folks at Imbue because they were building this from the ground up.
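For the metrics-collection tooling Josh mentions above (rather than logging into each box to eyeball temperatures), a minimal sketch might poll nvidia-smi's CSV interface and hand the rows to whatever aggregator you run. The query fields are real nvidia-smi options; everything else, including the 30-second cadence, is an assumption for illustration.

```python
import csv
import io
import subprocess
import time

QUERY = "index,temperature.gpu,utilization.gpu,power.draw"

def sample_gpus() -> list:
    # nvidia-smi's --query-gpu interface emits stable CSV, which is much
    # easier to scrape fleet-wide than the human-readable table.
    out = subprocess.check_output(
        ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader,nounits"],
        text=True,
    )
    fields = QUERY.split(",")
    return [dict(zip(fields, (v.strip() for v in row)))
            for row in csv.reader(io.StringIO(out))]

if __name__ == "__main__":
    while True:  # daemon-style loop; stop with Ctrl-C
        for gpu in sample_gpus():
            # In a real setup these rows would be pushed to Prometheus or a
            # similar aggregator instead of printed.
            print(gpu)
        time.sleep(30)
```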
You know, at Databricks and at Mosaic, we typically work with cloud providers because some of this stuff is just, there's too much to handle. It's complicated. There's a lot to deal with. And this doesn't even get into things like physical security, you know, securing power if you're the data center operator. Like this gets infinitely complicated and you have to abstract somewhere. Like, you know, and then you get to the folks who are literally building their own custom chips and like, good God.SWYX [00:19:36]: Like, oh my God, that's, you know,JONATHAN [00:19:38]: if you're one of those folks, you're having, you know, pour one out for the infra people at some of the AI chip startups who are having a really, really interesting time right now. But this stuff is really hard. And I don't think we talk about it much because there's so many other things that are hard. But the other hard things, I think everybody's becoming pretty familiar with at this point. This is something that I don't think there's ever really been a comprehensive discussion of, at least not that I've seen.SWYX [00:20:00]: Yeah, so my impression is that you guys, Mosaic, have your own software for sort of spinning up and down machines, just like Imbue had to build. But Imbue probably, it sounds like Imbue, you guys went fuller stack. I don't know how to describe it. Like Mosaic is not working with Dell on like their firmware.JONATHAN [00:20:21]: No, no, we're typically working with like, you know, pick your cloud provider on their Dell firmware or what have you. Like, it's kind of, I think one of the things, I don't know, Josh, you can correct me on this. It's kind of impossible if you're doing training to not go all the way through the entire stack, regardless of what happens. Like somehow I'm still chatting with cloud providers about power contracts, even though the whole point of dealing with the cloud provider is not to have to think about power contracts. Somehow I'm still asking them about which InfiniBand provider they used this time to see if this is part of the bad batch of cables I encountered on that cloud provider or what have you. Or like, we're still talking about a firmware update from pick your provider. You can't not do this. It's convenient that they have data center staff who are worrying about what to send back to which provider when, and they have people who can go and wait for the InfiniBand cables so they don't get stolen outside. But, you know, it's kind of, it's impossible not to really go full stack if you're thinking about the infrastructure at all. I don't know, Josh, correct me.JOSH [00:21:17]: No, I think that's right. That's what we expected from the beginning as well, is that we would inevitably have to get into the details here. And I'm glad that we kind of just planned for it. I think it made it a lot easier from our perspective to have direct control over this. Instead of having to go to the cloud provider that goes to the data center, that goes to the supplier, we could just go direct to NVIDIA or DellSWYX [00:21:37]: or the data center,JOSH [00:21:37]: whoever was responsible and be like, hey, this thing needs to change. And they're like, oh, okay. Yeah, that is our responsibility. Great, we can fix that. So it was just a lot easier for us to fix these bugs than if we had to go through an extra layer of email.SWYX [00:21:48]: Something we discussed in the pre-show was that you had a rule of thumb for cluster reliability.
You say here in the post, by and large, you expect around 3% of your machines to break every week. So you're basically going to churn through all your machines in a year; 3% a week compounds to roughly 150% of the fleet over 52 weeks.JOSH [00:22:04]: As it says in the post. So that would be true if it was a uniform failure like that. But as it says in the post, it's usually these kind of problematic nodes. And to be clear, that is the number that we've heard from other people is like they're having about 3%. I don't think we're experiencing failure rates that are that high. I think ours is actually quite a bit lower than that, probably because we've taken the time to like dig into a large, maybe a larger number than we should have, of these failures and get to the root cause of it and be like, oh, okay, like that's exactly what's going wrong.SWYX [00:22:33]: How do we fix this?JOSH [00:22:33]: How do we prevent this from happening? How do we make automated checks for this so that if it does happen, it just goes back to whoever owns that particular part of the process and they can fix it immediately.SWYX [00:22:43]: And that's part of what you're also open sourcing, which is the health checks, right? You got the NIC health checks, GPU health check, disk space health check, Docker, dmesg. I don't know what that is.JOSH [00:22:52]: That one is just a lot of stuff.SWYX [00:22:54]: Yeah.JOSH [00:22:55]: That one is one where we realized that actually like when these machines boot, sometimes they wouldn't actually boot cleanly all the way. Or when they rebooted, they had problems that they didn't have when they were working before, which was kind of frustrating. Like usually if you restart your computer,SWYX [00:23:08]: it gets better.JOSH [00:23:08]: Here you restart. It did not get better.SWYX [00:23:10]: It got worse.JOSH [00:23:10]: That was very frustrating. So this health check looks at every particular line we've ever seen from the boot, like in dmesg, every single log line that your computer emitsSWYX [00:23:21]: and says like,JOSH [00:23:21]: have we ever seen this before?SWYX [00:23:23]: Is this expected?JOSH [00:23:23]: Is this in the right order? Or is there something out of place? If there's anything out of place, let me say, okay, great. Like now it goes into this, like longer, more triage list of like, all right, great. Like, is this acceptable?SWYX [00:23:33]: Should we flag this?JOSH [00:23:33]: Like, should someone take a look at this? So we're looking down at a very, very granular detail level, what's happening on these computers to make sure that nothing is out of place. And that's critical because without that, if you're running your training, as Jonathan said, and this thing is slow, like what are you supposed to do? Right?SWYX [00:23:49]: Like you really,JOSH [00:23:49]: you really want to be very certain that like all 4,000 of these GPUs are working like they're supposed to.SWYX [00:23:54]: We know that.JOSH [00:23:54]: And so if it's slow, it's because like we messed up the config or something else and not because of this earlier thing that's like really hard to detect in software later.JONATHAN [00:24:01]: Yeah. I think the, I'm just curious to ask,SWYX [00:24:03]: like, you know,JONATHAN [00:24:03]: suppose you were to set up another, let's say another H100 cluster and it were at a different data center. And instead of the vendor being Dell, it was Supermicro or what have you. How much of this would be repeatable? And how much of this would you have to redo?
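A stripped-down sketch of the dmesg health check Josh describes: collect every kernel log line and flag anything that doesn't match a catalog of lines seen and triaged before. The whitelist patterns below are hypothetical placeholders; the real check presumably carries a far larger catalog built up over many boots.

```python
import re
import subprocess

# Hypothetical whitelist: one pattern per boot line seen before and judged
# expected. Anything unmatched gets escalated for human triage.
KNOWN_OK = [re.compile(p) for p in (
    r"^Linux version ",
    r"^Command line: ",
    r"usb \d+-\d+: new .* device",
    r"EXT4-fs .* mounted filesystem",
)]

def unexpected_dmesg_lines() -> list:
    # May require root, depending on kernel.dmesg_restrict settings.
    log = subprocess.check_output(["dmesg", "--notime"], text=True)
    return [line for line in log.splitlines()
            if line and not any(p.search(line) for p in KNOWN_OK)]

if __name__ == "__main__":
    bad = unexpected_dmesg_lines()
    if bad:
        print(f"{len(bad)} unexpected kernel log lines; flagging for triage:")
        for line in bad[:20]:
            print(" ", line)
        raise SystemExit(1)
    print("dmesg clean")
```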
I, you know, I genuinely don't know.SWYX [00:24:18]: A decent amount.JOSH [00:24:19]: I think it would go a lot faster the second time. I think there's lots of learnings that we had. And also the blog post,SWYX [00:24:24]: you know, yes,JOSH [00:24:24]: we are releasing the health checks, releasing some scripts, but a lot of the valuable stuff is also in the blog post itself, in the details and kind of the, you know, the learnings that we've had and the sort of errors that we run into. We tried as much as possible to surface those so other peopleSWYX [00:24:36]: could learn from thoseJOSH [00:24:36]: and avoid the same mistakes or failures as well. But I think it would go a lot faster.SWYX [00:24:41]: Although, yes,JOSH [00:24:41]: there would certainly be some things that'd be a little bit different. I mean, there'd probably be different CPUsSWYX [00:24:46]: or whatever,JOSH [00:24:46]: but I think a lot of that stuff is less,SWYX [00:24:49]: it's less,JOSH [00:24:49]: that's the like, that's less variable. I think most of it would apply the second time around. Although I'm sure next timeSWYX [00:24:56]: we're building one,JOSH [00:24:56]: it'll probably be, you know, at a scale that's 10x as big with a different chip or something like this.SWYX [00:25:00]: And then who knows?JOSH [00:25:01]: Yeah, with ConnectX-8,JONATHAN [00:25:02]: that will have its own fun behavior and all that good stuff. Yeah.SWYX [00:25:06]: Perhaps there's something that people don't discuss about, and you don't even talk about this in the blog, but I always wonder is what is the timeline that's like kind of reasonable for this amount of work, at least the initial stages? And also what does the team composition look like for setting up a cluster, right? Like what are the mix of skills that you typically would require to get all this going?JOSH [00:25:27]: I can't really speak to typical. One thing I am very proud of is how much we accomplished with such a ridiculously small team. Like our infrastructure team is like, you know, fluctuates from week to week, depending on like how many things are on fire and how much we need to build. But it's like between like three and six people, like it's small. It's not like some huge team of like tons and tons of engineers. But those people are very, very good at what they do. And so that has allowed us to get a lot of mileage out of these things. I think it's not that we're building everything, right? It's not that three to six people build this whole thing. I definitely want to like, you know, say thanks very much to Dell and H5 and NVIDIA and the other people that have done a lot of the work, like to bring up this cluster, you know, with 4,000 GPUs and a three-tier networking architecture, you have 12,000 cables. So that's 24,000 things that need to be plugged in. Like that's just a lot of stuff to plug in, right? And you don't want to mess it up. Like each one needs to be done correctly. Like if it's a little bit loose, it doesn't really work.SWYX [00:26:23]: If you break it,JOSH [00:26:23]: you need to replace it. Like there's a lot of workSWYX [00:26:26]: that goes into this.JOSH [00:26:27]: Yeah.SWYX [00:26:28]: And then, you know,JOSH [00:26:28]: that's just like that's it. That's if you were to do everything right the first time.SWYX [00:26:32]: And if you didn'tJOSH [00:26:32]: have to fix anything.
But inevitably, you know, you will have to replace something, which means like taking all the wires out, pulling the thing out, taking all the GPUs out, going and fixing some cable, putting it all back correctly, putting it back in, doing this every time. So there were a lot of people at Dell, NVIDIA and at H5 that all helped a ton with this stuff. I don't know the exact size of the Dell team. It also fluctuated over time.SWYX [00:26:55]: Yeah, excellent. And then, you know, so you have all the hardware set up and now you're firing it up for a single node. There's a long description that you guys have about just like monitoring the MFU, right? And what each situation might be indicative of. One of the most interesting things to me that I saw from here is like, you know, if training immediately starts off at 60 to 80% MFU, something's wrong.SWYX [00:27:24]: But like, you know, what are like, you know, some anecdotes or, you know, notable scenarios here that you might call out as maybe counterintuitive or super interesting.JOSH [00:27:36]: There's just so many of them. I mean, one of them, which I think is probably pretty common, like common knowledge by this point. But like we did have a sort of likeSWYX [00:27:46]: which one was this exactly?JOSH [00:27:47]: I think for the MFU, like gradually getting worse over time. I think that one, when we saw that the first time we were like, what the heck is going on? Like, why does it get just like a little bit worse? This is so strange. Like, what is it getting lazy or tired or something? Like, is it heat? Like what's going on? And in this particular case, it was memory fragmentation. Because you have hundreds of machines, they're doing garbage collection at slightly different times. And then they get slightly further apart and slightly more and more jittered until eventually they're all happening kind of at random times. And just like really messing up each one of your steps. So you just turn off garbage collection and call it a day, basically,SWYX [00:28:20]: to be honest.JOSH [00:28:20]: There's other things you can do if you want to be a little bit more sophisticated about it. But you can also just manuallyJONATHAN [00:28:25]: have it all garbage collect on some interval. Like that's what we've done. We just have a garbage collection callback that just runs. But I've seen the exact same thing.JOSH [00:28:33]: Yeah, yeah, exactly. So I thought that one was kind of funny. And we did trace that one down and look and we did find the actual call. Like, again, this goes to like having good tools. So we had really good tools where we could look at a bunch of like actual traces in C and be like, OK, cool. This is the thing that's taking a lot of time. Or like, you know, this is the thing that doesn't quite line up here. Like, oh, I guess it's garbage collection. OK, cool.SWYX [00:28:52]: Interesting.JOSH [00:28:52]: Yeah, let's just try taking it off.SWYX [00:28:54]: OK, great.JOSH [00:28:54]: That's what it was. Now we can fix it. So for each of them, like basically bugs are not hard if you have good tools. But if you don't have good tools, bugs can be very, very hard. So similarly for like heat, another thing that we saw was like, oh, you know, the CPU is getting throttled. OK, well, it's easy to see if you're monitoring the CPU throttling or monitoring the heat. If you're not monitoring that, it's really hard to know why it's just suddenly one of them is going slower.
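A minimal sketch of the garbage-collection fix both of them describe: disable Python's automatic collector and collect on a fixed step interval, so every rank pauses at the same moment instead of drifting apart. run_one_training_step is a hypothetical stand-in for the actual training step, and the 1,000-step interval is an arbitrary assumption.

```python
import gc

def run_one_training_step(step: int) -> None:
    ...  # placeholder for the real forward/backward/optimizer step

def train(num_steps: int, gc_every: int = 1000) -> None:
    # Disable automatic collection so hundreds of ranks don't each pause at
    # random, slowly de-synchronizing times and turning every step into a
    # straggler lottery.
    gc.disable()
    try:
        for step in range(num_steps):
            run_one_training_step(step)
            if step % gc_every == 0:
                gc.collect()  # synchronized, predictable pause on every rank
    finally:
        gc.enable()
```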
I noticed also in the pieceSWYX [00:29:17]: that you mentioned FSDP and ZeRO-3. Actually, we met, I went to ICLR and Guanhua from the DeepSpeed team was there presenting ZeRO++. I was wondering if you want to make any call outs to, you know, particular open source or open library or open whatever implementation teams that were super helpful in your process. I think we ended up actuallyJOSH [00:29:39]: pulling from a whole bunch of different ones to pull things into our own particular pipeline. So we use things from NVIDIA's, you know, Megatron stuff. We use stuff from probably DeepSpeed. I think we pulled in a bunch of different pieces from a bunch of different places. So it was really nice to see all these working open source examples. I think I really appreciate all the effort that has gone into actually tuning these things because you can tune them, but it's a lot of work to like tune this stuff and do all this stuff from scratch. It's really nice to have like a working example. I think those are probably the two biggest ones, DeepSpeed and Megatron alone, but there are probably other ones as well.SWYX [00:30:13]: Is there a particular thing in the ecosystem where you would call out as like, you know, there should be something here that is open source, but like it's not really, it's like everyone kind of builds it on their own. I want to say something with the file system because everyone talks about the file system eventually.JOSH [00:30:28]: The file system actually was,SWYX [00:30:30]: I mean, we did somethingJOSH [00:30:31]: kind of dumb there. Like we have our own sort of local mirror so that we can, you know, like a crappy version of S3SWYX [00:30:38]: that's local,JOSH [00:30:38]: but it's just a pretty simple script, right?SWYX [00:30:41]: Like I think we run likeJOSH [00:30:41]: a little web server that just like serves files and then, you know, it can upload themSWYX [00:30:45]: and download them.JOSH [00:30:45]: Okay, great. And part of the reason we did that is that our internet connectionSWYX [00:30:50]: in the beginningJOSH [00:30:50]: was not the like full speedSWYX [00:30:52]: one that we wouldJOSH [00:30:52]: eventually have. And so we were a little bit more kind of bottlenecked in terms of internet bandwidth. And so we had this. I think we looked at a bunch of services out there like MinIO and some other ones, but a lot of these like come with a lot of extra overhead and maintenance. And since we already have so much infrastructureSWYX [00:31:09]: to deal with,JOSH [00:31:09]: we kind of didn't want to, you know, bring in a whole other like cloud provider, virtualize something, something.SWYX [00:31:14]: We just wanted something simple.JOSH [00:31:14]: So we went with that, which has been quite helpful. Like our toolsSWYX [00:31:19]: are usually quite simple.JOSH [00:31:19]: It's like Bash and Python and SSH and Docker. Like we'd like to keep things simple so that it's easier to debug, like less layers of infrastructure, less layers of abstraction, make it a lot easier to work with. Like we don't use Kubernetes,SWYX [00:31:30]: for example,JOSH [00:31:30]: and we just directly launch these things. And it's just been much easier to debug this way. One tool actually that does come to mind that I will call out is Kraken from Uber. That was great. We love that tool. We were a little bit skeptical. What is it?SWYX [00:31:44]: I'm sorry.
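The "little web server that just serves files" Josh describes could be as small as the following sketch: Python's built-in HTTP server for downloads, plus a tiny PUT handler for uploads. The /srv/mirror path and port are hypothetical, there is no auth, and uploads are read fully into memory, so treat this as an illustration of the "crappy local S3" idea rather than their actual script.

```python
import http.server
import pathlib

ROOT = pathlib.Path("/srv/mirror")  # hypothetical local cache directory

class MirrorHandler(http.server.SimpleHTTPRequestHandler):
    """GET serves files out of ROOT; PUT uploads them. Nothing else."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, directory=str(ROOT), **kwargs)

    def do_PUT(self):
        dest = ROOT / self.path.lstrip("/")
        dest.parent.mkdir(parents=True, exist_ok=True)
        length = int(self.headers["Content-Length"])
        dest.write_bytes(self.rfile.read(length))  # whole body in memory: fine for a sketch
        self.send_response(201)
        self.end_headers()

if __name__ == "__main__":
    ROOT.mkdir(parents=True, exist_ok=True)
    http.server.ThreadingHTTPServer(("0.0.0.0", 8080), MirrorHandler).serve_forever()
```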
Yeah.JOSH [00:31:45]: So Kraken is this, yeah, it's a distributed like Docker registry, basically, that uses BitTorrent to like transfer things between the machines in a sort of nice optimal way. Like in the very beginning, the naive way is like you have this one Docker registry, which was outside of the cluster. So every time we change an image, you know, there's many gigabytes that each of the 500 machines needs to download.SWYX [00:32:07]: So that just takesJOSH [00:32:07]: a really long time. So what this thing does is like just one of them downloads it and then like they all sort of broadcast all the pieces to each other. And it was just like a really nice, fast way of getting these images down. And it was very robust.SWYX [00:32:19]: Like there's a lotJOSH [00:32:19]: going on under the hood, but I think it's a pretty cool tool that we haven't really had any bugs with at all. Amazing.SWYX [00:32:26]: Yeah. I mean, that's all my questions, I guess, for the infra piece. I don't know if, John, you had something that you were sort of burning to ask or.JONATHAN [00:32:33]: No, all I can say is just, same, in a lot of places. Like, you know, been there, done that, seen this: plus one. I think the one big difference, you know, perhaps in philosophies is we've tried to basically standardize on as much commodity stuff as possible, just because, you know, I think the reason I asked about trying to do thisSWYX [00:32:50]: on multiple differentJONATHAN [00:32:50]: pieces of infrastructure is like, I think we're running on like six or seven different clouds right now. And everybody has done something slightly different. And my gosh, the little differences add up as you know, you've seen. And so, you know,SWYX [00:33:04]: our philosophy has been like, whatever the hellJONATHAN [00:33:05]: we can standardize, please let's standardize it. Like vanilla off-the-shelf FSDP.SWYX [00:33:10]: And like, you know,JONATHAN [00:33:10]: we wrote our own data loader, but we've tried to make that as much of a standard as we can across our infrastructure and in Databricks, because things just start getting really complicatedSWYX [00:33:18]: or like we useJONATHAN [00:33:18]: Kubernetes extensively because it at least gives us a uniform set of APIs. Like that's our hardware abstraction layer to a certain extent for everything else. So it's just, you know, a difference in philosophy there. But otherwise, like, yeah, this stuff is really, really hard. And I feel like we take for granted how much of this, you know, is done for us when you go and you just query ChatGPT, for example. Like, oh my God, everything going on underneath that, you know, it's kind of a miracle that the machines boot up, let alone that you can like query a giant language model that's probably doing inference across multiple machines and was trained across thousands of machines. Like, you know, minor miracle.SWYX [00:33:54]: Yeah, it is an awesome amount of power that we invoke with a single API call that we take for granted these days. It's absurd. Yeah, I mean, like Kubernetes, like that point about Kubernetes, I will say as a former AWS employee, like it seems like it would be ideal for Imbue to at some point make it more abstracted or agnostic because you're going to want to, you know, replicate your setup. We do have our ownJOSH [00:34:19]: sort of replacement. It's just a much simpler version of Kubernetes. Kubernetes is really designed for running services, not for running experiments.
Like that's not its like main architecture. And so for us, like we have everything that's like, cool, you're going to run an experiment. So you want it to run to completion, right?SWYX [00:34:34]: OK, great.JOSH [00:34:34]: Like the primitives are sort of built around a slightly different style. And that makes it a lot easier, like just a lot simpler to fit the nature of like these machines are going to disappear. They will need to be rebooted for infrastructure upgrades. They will like something will happen to the GPUs. Failure is like baked into this as like a core part of our infrastructure. So it's not that we don't have an abstraction. It's that it's a sort of simpler, more tailored abstraction for the particular work that we're doing.JONATHAN [00:34:58]: Yeah, I think it all depends on what your goals are. And like, I think the challenge in a lot of the deep learning stuff right now is that people often build things that are more complicated than necessary to get the job done. And the complication is the enemy of everything. You know, don't use a fancier parallelism strategy than you have to. Don't use a fancier set of libraries than you have to.SWYX [00:35:18]: Don't do anythingJONATHAN [00:35:18]: that you don't have to do because it's hard enough as it is. Like, don't overcomplicateSWYX [00:35:23]: your own life.JONATHAN [00:35:23]: Don't try to bring in more tools or more fancy architecture tweaks if you absolutely don't have to.SWYX [00:35:29]: Like getting to the minimumJONATHAN [00:35:30]: necessary to get the job done. And it's really tempting to want to try to use everything. So like, I totally understand that one.SWYX [00:35:37]: I think the last piece I'll maybe call out is that I'm just going to weave this in just because I see the opportunity to do it. Are there any infrastructure shifts that need to arise because of changing model architecture? So I think, for example,SWYX [00:35:57]: you're announcing a dense model, a 70B dense model, whereas John just worked on DBRX and the text-to-image model, which presumably have different bottlenecks.JONATHAN [00:36:10]: That's correct for us. You know, we train both dense and mixture-of-experts models. The one we happened to, you know, kind of get permission to open source was a mixture-of-experts model. And those models are very demanding when it comes to network bandwidth, at least if you're training them in kind of FSDP ZeRO-3 style, where there's just a lot of parameters getting shuffled back and forth. And your ratio of kind of compute to amount of data that you have to shuffle back and forth becomes a lot worse because you're now, you know, you're only using a fraction of the parameters for every token instead of all the parameters. And so we had to really push the envelope on getting all the stuff to the right places on time. And so actually the networking part of DBRX was the single hardest thing, I think, of the entire process. Just getting MoE training working at scale across a big cluster. We still managed to, I think, do it all with commodity parts, which was very exciting. You know, we were using FSDP and we eventually used HSDP, a version of FSDP where you have multiple smaller replicas, sharding within each replica and doing data parallel across them. And that helped a lot with network latency issues that we were running into just because we were transmitting so much data, you know, for every single part of the process.
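For readers who want to see what the FSDP-to-HSDP move looks like in code, here is a minimal sketch using PyTorch's built-in hybrid sharding strategy. This is not the DBRX training code; it only shows the strategy switch, and it assumes a NCCL process group has already been initialized (e.g., under torchrun).

```python
import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, ShardingStrategy

def wrap_hsdp(model: torch.nn.Module) -> FSDP:
    # HYBRID_SHARD is FSDP's HSDP mode: fully shard parameters *within* a
    # replica group (by default, one node) and replicate *across* groups.
    # The heavy all-gather/reduce-scatter traffic stays on fast intra-node
    # links; only gradient all-reduce crosses the slower inter-node network.
    return FSDP(
        model,
        sharding_strategy=ShardingStrategy.HYBRID_SHARD,
        device_id=torch.cuda.current_device(),
    )

# Typical launch: torchrun --nproc_per_node=8 --nnodes=N train.py,
# after torch.distributed.init_process_group("nccl").
```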
I think it was actually instructive to think about how Google designs their hardware and software together, personally. They're training, as far as I understand, using kind of a ZeRO-3 style of training, and have been for a while. They also train mixture-of-experts models. TPUs have a very different network bandwidth to compute ratio. They have a lot more bandwidth, just objectively. And TPUs per chip tend to be a little bit less compute intensive and have a little bit less memory. You know, it's just a different design choice. So the ratio of flops to bandwidth is very different. And that means that it's much easier for Google to be able to pull offSWYX [00:37:54]: some of this stuff.JONATHAN [00:37:54]: They also have interesting, you know, Torus style network architecture, or Torus style, like, literal network architectureSWYX [00:38:00]: is not like the model,JONATHAN [00:38:00]: but the network.SWYX [00:38:02]: Is this the sort of block attention? I forgot what you call it. So this is just more or the,JONATHAN [00:38:07]: yeah, this is more, not the ring attention, but these are the ring all-reduces. Like you have three different dimensions of rings because they kind of put you in these three-dimensional toruses from what I understand. And so like, you know, Google's infrastructure in some sense is kind of, I wouldn't say built for this, but maybe the way that Google trains models is built for a slightly different bit of infrastructure they have. And it's kind of neat to think about that. You know, one thing that I think NVIDIA announced for both the GH200 and the GB200 is this hybrid networking where you'll have blocks of NVLink network chips. I think for the GB200, I think it's like groups of 72 GPUs will all have NVLink to each other. So higher bandwidth, then you'll have normal networking of some kind, InfiniBand or RoCE or what have you between these blocks. And that's kind of a, you know, it's a change due to the fact that, you know, it's hard to build really high bandwidth networks over very large groups, but it is now a blocked networking. And you have to think about how you architect your model and your parallelism differently. You also have to think about fault tolerance differently because it now matters where you lose a GPU, whereas it didn't before. So, you know, it's just all really interesting and really fun, speaking personally, but it's going to mean new nightmares when we all move to that generation and have to think about, you know, new versions of these problems.JOSH [00:39:20]: As you go up to larger scales, it gets quite different. Like right now, you know, if you're experiencing, let's say, for example, you experience a GPU failure every day, that's fine.SWYX [00:39:31]: Just restart.JOSH [00:39:31]: If you make your thing 24 times as big, now it's once an hour. Now it stops being quite as easy to just restart, right? So now you have to kind of bake in this sort of redundancy that you didn't have before. So I think as you go up in scale, you end up running into like a lot of really interesting problems that also inform the actual like design. Yeah, I mean, as an orchestration guy,SWYX [00:39:52]: this is why I always emphasize like very cheap storage or very fast storage. So you can checkpoint more, but that's probably not the best solution for fast, you know, training.JONATHAN [00:40:05]: Which works fine when you're doing language and then you move to vision or video.
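Josh's failure-rate arithmetic connects directly to how often you checkpoint. A common back-of-the-envelope rule (not from the blog post) is Young's approximation, t_opt = sqrt(2 * C * MTBF): as mean time between failures drops 24x, the sensible checkpoint interval shrinks by roughly 5x. A sketch, assuming a hypothetical 60-second checkpoint cost:

```python
import math

def optimal_checkpoint_interval(checkpoint_cost_s: float, mtbf_s: float) -> float:
    # Young's classic approximation for the checkpoint interval that
    # balances checkpoint overhead against lost work after a failure.
    return math.sqrt(2 * checkpoint_cost_s * mtbf_s)

# One failure per day (small cluster):
print(optimal_checkpoint_interval(60, 24 * 3600) / 60)  # ~53.7 minutes
# 24x the hardware -> roughly one failure per hour:
print(optimal_checkpoint_interval(60, 3600) / 60)       # ~11.0 minutes
```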
And then, you know, you have multi-petabyte datasetsSWYX [00:40:12]: and getting, you know,JONATHAN [00:40:13]: cheap, fast multi-petabyte storage starts to bite. Like I've certainly encountered issues where the literal data center where my GPUs were did not have enough, you know, object store to fit the datasets that people wanted to bring into that data center from whichever users were trying to bring them in. And then you get to a wholeSWYX [00:40:31]: different world of hurtJONATHAN [00:40:31]: where you have to keep your data in a different region because the region is just out of storage. So things get fun really fast.SWYX [00:40:39]: Speaking of vision, Josh, actually, you know, Imbue is an agents company, but you're announcing a text-only model. Where does the vision side come in?JOSH [00:40:49]: I think we've actually done a lot of work in the past and people can see kind of our blog posts about sort of self-supervised learning and some other kind of vision-related stuff in the past as well. So we're very familiar with that stuff. But I think our main focus right now is on kind of, as we say, coding and reasoning. And there, there's certainly a visual component to some problems. But, you know, it's not necessarily required for all problems. And actually we found that for most of the kind of like code writing and reasoning problems that we care about, the visual part isn't really a huge important part of it. Sometimes if you really need to, you can maybe describeSWYX [00:41:24]: the thing.JOSH [00:41:24]: There are other like, you know, multimodal models that you can use off the shelf to sort of plug in for those particular piecesSWYX [00:41:30]: that you need, right?JOSH [00:41:30]: Like if something is driving a browser or whatever, like you can sometimes get away with not having to have that baked into the original model. So for us, you know, in a sense, we kind of do a lot across the stack. We're working on our own infrastructure and pre-training and RL and fine tuning and products and everything. But in another sense, we're very narrowly focused on the application side. So all of the stuff across the stack is kind of going toward a very particular purpose. And so that particular purpose right now doesn't really need vision. So we think that people are going to make all sorts of really cool image modelsSWYX [00:42:00]: like Jonathan, right?JOSH [00:42:00]: And all sorts of interesting multimodal models into the future. We'll let them go do that. That's great. We'll take advantage of that, partner with those people in the future. And right now we're really focused on kind of the core reasoning and coding capabilities and aspects of the model.SWYX [00:42:14]: I wanted to go into CARBS since that's kind of the next layer of the stack. We talked about CARBS in the first episode with Kanjun because you've actually had a blog post about it like a couple of years ago. Maybe let's introduce it.JONATHAN [00:42:26]: Has that been a couple of years now?JOSH [00:42:28]: No, it must have been at least one year. Hopefully it's not multiple years.SWYX [00:42:32]: Sorry, I'm counting AI time. Yeah, yeah. Yeah, I was going to sayJONATHAN [00:42:35]: you're making me feel really old right now.SWYX [00:42:39]: I count everything before the Generally Intelligent rename as like, you know, prehistory. Yeah. And now sort of modernity, right?
So I actually thought CARBS was more about hyperparameter optimization in a sense of like sort of parameters, hyperparameter search. Whereas, you know, when you introduced it, especially in this blog post, it's more about scaling laws and predictability of like, are we sort of in the right ballpark before we scale things up? Maybe sort of recount the history of CARBS.JOSH [00:43:10]: Yeah, so it really is a little bit of both. So CARBS is, it's maybe a backronym, but it's for cost-aware Pareto-region Bayesian search. So this is about technically how it works, but carbs is like, you know, we like pastries and stuff.SWYX [00:43:26]: So great, why not? But the point is thatJOSH [00:43:29]: it's a cost-aware hyperparameter tuner. So most hyperparameter tuners, you kind of say, OK, here's this objective function. I want you to make this number as big as possible or as small as possible, whichever direction you want to go. So yeah, just go make this number, you know, as small as possible. OK, so it'll try a bunch of differentSWYX [00:43:46]: hyperparameters,JOSH [00:43:46]: a bunch of different configurationsSWYX [00:43:48]: to figure out, like,JOSH [00:43:48]: how do I tweak your network and architecture, et cetera, to get the kind of best performance I possibly can. That's usually saying, like, you know, almost all of these hyperparameter configurations are, let's say they're all going to use the same number of GPUs or the same number of nodes.SWYX [00:44:01]: So it's going to runJOSH [00:44:01]: for the same amount of time.SWYX [00:44:03]: So you can do that.JOSH [00:44:03]: You can get a number out and that's great. But what CARBS does is it says,SWYX [00:44:07]: OK, actually,JOSH [00:44:07]: what if we relax that constraint? What if we say each of these different points, we're going to model how expensive it will be to sample this configuration. So what if we train with just one one-hundredth of the data? Like, how well can we do?SWYX [00:44:19]: What if we trainJOSH [00:44:19]: with one tenth of the data? What if we train with all the data? That way you can understand, like, as we get more and more data, as we spend more and more compute,SWYX [00:44:26]: as we make a biggerJOSH [00:44:26]: and bigger network, how does performance change with these things that change? Like how expensive it is to even explore this data point. So by doing that, we can see the scaling laws for not just, you know,SWYX [00:44:36]: the scaling lawsJOSH [00:44:36]: from like the, you know, Chinchilla paper, the scaling laws for all parameters. We can see how does the number of layers change with this? How does the, you know, the learning rate change? How do the like, you know, various types of regularization change? So you can see these nice scaling laws. And as you're going across costs, like how should this be changing as you're scaling up your model? So that, coupled with the kind of metric that we chose, which is a very precise way of measuring performance, allowed us to really like hone in on parameters that worked really wellSWYX [00:45:05]: and understand, like,JOSH [00:45:05]: how do we want to scale those up, especially as we're changingSWYX [00:45:08]: things about the network?JOSH [00:45:08]: Like one of the things that we did is we used a custom tokenizer. As we change this tokenizer, it changes a bunch of other things about the model. So how should we scale up this entirely new tokenizer? Like no one has ever made a model this large with this tokenizer before.
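To make the cost-aware idea concrete, here is a self-contained toy: a synthetic "training run" whose loss improves with data and whose best learning rate drifts with scale, plus a Pareto filter over (cost, loss) pairs. This only illustrates the Pareto-region framing; the real CARBS is a Bayesian optimizer, and every constant below is made up.

```python
import math
import random

def toy_run(lr: float, data_fraction: float):
    # Stand-in for a real training run: returns (cost, loss). Loss improves
    # with more data, and the optimal lr shifts with scale, which is the
    # structure a cost-aware tuner exploits. Entirely synthetic.
    cost = data_fraction                       # proxy: cost ~ tokens seen
    best_lr = 0.01 * data_fraction ** 0.25     # optimum drifts with scale
    loss = (2.0 * data_fraction ** -0.05
            + 0.5 * math.log10(lr / best_lr) ** 2
            + random.gauss(0, 0.01))
    return cost, loss

def pareto_front(points):
    # Keep runs where no other run is both no more expensive and strictly better.
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] < p[1] for q in points if q is not p)]

random.seed(0)
runs = [toy_run(10 ** random.uniform(-4, -1), 10 ** random.uniform(-2, 0))
        for _ in range(200)]
front = pareto_front(runs)
print(f"{len(front)} of {len(runs)} runs sit on the cost/loss Pareto front")
```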
And so how do we want toSWYX [00:45:22]: change all these things?JOSH [00:45:22]: CARBS kind of shows you, like, look, as you change these parameters, like these other ones are kind of dependent on this.SWYX [00:45:28]: Like this is the, these areJOSH [00:45:28]: the relationships between them. So you can better understand, like, OK, if I'm going to scale this up 10x or 100x, like, where do I want to be? I can only go so far. And so, you know, we did run, like, I think maybe it was like a 14B one or somethingSWYX [00:45:40]: like that to check.JOSH [00:45:41]: And so we had a bunch of runs at like 1B, and a 14B, and then at 70B. I think we just did one at 14B. So you can, we get to check that like, oh, is this on the curve? Like, is this where we expect? It was like right there. So then great, go on to the next one. Yeah, I mean, that makes a lot of sense.SWYX [00:45:56]: I wonder if, so one of the key questions, and correct me if I'm wrong, but like usually people do search or do their evals just based on loss. But you actually evaluate based on, you know, the sort of end state evals that people might expect, like HellaSwag and LAMBADA, whatever. What is the norm here? Is there a norm?JOSH [00:46:20]: Yeah, I don't know if there's a hundred percent.SWYX [00:46:21]: I don't know. I only see loss on most people's reports.JOSH [00:46:25]: I think it's easy to, like, loss is very nice because it's very precise. It will tell you, like, very fine grained differences between like really small changes in your hyperparameters or network architecture. Whereas, especially at the smaller scales, if you're looking at like accuracy, it's very noisy. Like it might be zero or a hundred or like, you know, fluctuating by like 10 or 20 percentage points, which makes it really hard to tell, like, did that change actually mean anything? So our loss is sort of a combination of these two. Instead of saying, like, let's just look at perplexity, we say, let's look at perplexity on the tasks that we care about for multiple choice questions effectively.SWYX [00:47:00]: So we're saying like, yes,JOSH [00:47:00]: this is formulated as a multiple choice question, and we're going to look at the, like, you know, the loss or perplexity for this particular answer token. And that ends up being something that's like both targeted to what you actually care about and also very precise. The nice thing about this though is that it's independent of the data that you train on. One thing that's annoying about perplexity or about loss is that as you change your data set, this is really obnoxious because now it fundamentally changes your loss, right? And so you can't tell, like, how do I tweak my data set? But because we have this held-out evaluation data, we can still compare runs even as the training data changes.
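A sketch of the evaluation metric Josh describes: score only the answer tokens of a multiple-choice item by their cross-entropy under the model, and pick the lowest-loss choice. It assumes a Hugging Face-style causal LM whose forward pass returns .logits and a tokenizer with .encode; both are stand-ins rather than Imbue's actual eval harness.

```python
import torch
import torch.nn.functional as F

def choice_loss(model, tokenizer, question: str, choice: str) -> float:
    """Mean cross-entropy on the answer tokens only, conditioned on the question.

    Lower is better; scoring just the answer continuation gives a precise,
    training-data-independent signal even where raw accuracy would be noisy.
    """
    q_ids = tokenizer.encode(question)
    c_ids = tokenizer.encode(choice)
    ids = torch.tensor([q_ids + c_ids])
    with torch.no_grad():
        logits = model(ids).logits  # assumes an HF-style output object
    # Logits at position t predict token t+1, so slice the answer span.
    ans_logits = logits[0, len(q_ids) - 1 : -1]
    ans_targets = ids[0, len(q_ids):]
    return F.cross_entropy(ans_logits, ans_targets).item()

def pick_answer(model, tokenizer, question: str, choices: list) -> int:
    losses = [choice_loss(model, tokenizer, question, c) for c in choices]
    return min(range(len(choices)), key=losses.__getitem__)
```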
Tyrannosaurus Rex, Brontosaurus, Velociraptor, Triceratops… all fake?!? No way?! It can't be?! What about all of the fossils they've found over the past 150+ years? Why would the Smithsonian and other institutions ever promote an extinct animal species that never really existed? What did they have to gain? We all grew up learning about the amazing, wonderful and terrifying world of the dinosaur. Giant reptiles that roamed the earth 25-65 million years ago and have been extinct for as long. Geez! How did their bones survive such a long period of time and stay fully intact? Why do we never find an entire dinosaur but instead just a few bones here and there… but we've filled in the gaps with such detail? Why do museums all over the world never show the actual bones but replicas of the 'real thing' while the real bones are tucked away somewhere else? Why is there nothing recorded about dinosaurs before 1842? And what about 'Giants'? Supersized people that may have existed in many different cultures all over the globe in various time periods, ranging from 7 to over 10 feet tall? Is everyone just telling fairy tales and stories, or did extra-large people or human-like creatures once walk the planet? These are all great questions that the Tin Foil Hat Club will investigate in the 2nd Conspiracy Theory episode here on the Strong By Design podcast show! And we just scratched the surface with our third topic of debate… to be continued! "Once you accept the premise, your mind is then trying to create truth from that premise." — Stephen Ohocinski. Time Stamps: 00:37 - Welcome to the 'Strong by Design' Podcast 01:01 - Get to know today's special guests, the Tin Foil Hat Club 04:07 - Last time with the Tin Foil Hat Club: Popular conspiracy recap! 06:30 - How stories change the world 08:15 - Conspiracy theory #1: Dinosaurs never existed? 12:01 - How what you believe shapes what you conspire 13:41 - Meet the man who invented the dinosaur 24:41 - Cui Bono? Exploring the motives behind dinosaur fakery 35:19 - 'Is Genesis History?': How science connects to the Bible 36:00 - The Tin Foil Hat Club's take on reimagining a dinosaur-free world 39:53 - Conspiracy theory #2: Were giant humans real? 48:44 - How society benefits from hoaxing giant people 56:10 - Understanding the 'Slow Drip' theory 57:47 - Conspiracy theory #3: The unsolved mystery of JonBenét Ramsey… to be continued. Resources: Got questions or other good topics? Email us at strongbydesignpodcast@gmail.com Support the Show. Connect w/ CriticalBench: Youtube Facebook Instagram CriticalBench.com StrongByDesignPodcast.com
How many times have you asked yourself: which is better, Sanden or a Velociraptor? On the one hand, Velociraptors hunt in packs, but on the other hand, Sanden Totten is funny. Sanden has great hair, but Velociraptors have been featured in several movies. It's impossible to decide. Well, grab your Smarty Pass, because with the help of a fun game we'll decide which is best: Sanden or a Velociraptor!
In this Meaties ep, Lace reveals the sad truth she learned about Velociraptors and Katherine describes one of the greatest weekends of her life while touring w/ Bert Kreischer! FOLLOW US ON IG: CHEATIES PODCAST | Lace Larrabee | Katherine Blanford SHOP FOR GIGGLE GLOSS HERE HAVE YOU CHEATED, BEEN CHEATED ON OR BEEN A SIDEPIECE IN A RELATIONSHIP? CALL TO LEAVE A VOICEMAIL TEASING YOUR STORY & YOU MIGHT JUST END UP ON AN EPISODE OF CHEATIES! 888-STABBY-8 (888-782-2298) Learn more about your ad choices. Visit megaphone.fm/adchoices
This group of predatory dinosaurs includes such famous names as Deinonychus, Microraptor, and Velociraptor, and they're among the most well-studied and popular dinosaurs of all time. This episode, we'll discuss what sets these dinosaurs apart, as well as the much-discussed and -debated questions surrounding their relationships to birds, their distinctive claws and wings, and their hunting strategies. In the news: ant-mimic spiders, fishapod spinal column, early dinosaur growth, and a fossil tapeworm. Time markers: Intro & Announcements: 00:00:00 News: 00:05:20 Main discussion, Part 1: 00:33:00 Main discussion, Part 2: 01:13:50 Patron question: 02:22:05 Check out our website for this episode's blog post and more: http://commondescentpodcast.com/ Links mentioned in the announcements: Palestine Children's Relief Fund: https://www.pcrf.net/ Jewish Voice For Peace: https://www.jewishvoiceforpeace.org/take-action/ Join us on Patreon to support the podcast and enjoy bonus content: https://www.patreon.com/commondescentpodcast Got a topic you want to hear about? Submit your episode request here: https://commondescentpodcast.com/request-a-topic/ Lots more ways to connect with us: https://linktr.ee/common_descent The Intro and Outro music is "On the Origin of Species" by Protodome. More music like this at http://ocremix.org Musical Interludes are "Professor Umlaut" by Kevin MacLeod (incompetech.com). Licensed under Creative Commons: By Attribution 3.0 http://creativecommons.org/licenses/by/3.0