Podcasts about Cutlass

Short sword used by sailors on sailing ships

  • 148 PODCASTS
  • 407 EPISODES
  • 1h 4m AVG DURATION
  • 1 MONTHLY NEW EPISODE
  • May 2, 2025 LATEST
Cutlass

POPULARITY

Popularity chart by year, 2017–2024


Best podcasts about Cutlass

Latest podcast episodes about Cutlass

EH?
Cutlass and Kas Kas

EH?

Play Episode Listen Later May 2, 2025 20:25


The cutlass has had a long history in Jamaica, from the days of the Buccaneers to the days when sugar was king. There are six names for the cutlass in Jamaica, and in the summer of 1985 in Big 9, I learned everything I needed to know about the cutlass.

Best Rapper In L.A.
Ep #82 "67 Cutlass Continued"

Best Rapper In L.A.

Play Episode Listen Later Apr 3, 2025 33:48


In this third episode, covering "Love & Rockets Vol. 1: The Transformation," Murs continues his breakdown of "67 Cutlass," the Ski Beatz-produced banger that continues his long lineage of great story-telling songs.
Stream 3 brand new singles from the final album in this trilogy, "Love & Rockets 3:16 (The Emancipation)": https://ffm.to/murs316
Buy "Love & Rockets 3:16 (The Emancipation)" now on vinyl, tape and CD: https://www.mellomusicgroup.com/collections/murs
SEE MURS ON TOUR! Buy Tickets for The Last Run Tour: https://www.murs316.com/event-list
Listen to the album on Spotify: https://open.spotify.com/album/6DUdy2eiuji8JlurLj8mgE
Listen to the album on Apple Music: https://music.apple.com/us/album/the-final-adventure/575079049
Support the podcast to get exclusive episodes and BRILA merch here: https://www.patreon.com/Murs316
Follow us on IG: https://www.instagram.com/brilapod/
Tune into Mondaze with Murs on Twitch: https://www.twitch.tv/3point5
Check out Murs "Daddytron" Video Mix: https://www.youtube.com/watch?v=L2edzhPUZ1M
Hosted on Acast. See acast.com/privacy for more information.

Best Rapper In L.A.
Ep #81 "67 Cutlass" & Other Tales From The SouthWest

Best Rapper In L.A.

Play Episode Listen Later Mar 27, 2025 52:20


In this second episode covering Love & Rockets Vol. 1: The Transformation, Murs reflects on his current tour life and outlines some of his most impactful experiences in the great American Southwest, where the fictional story of his fan favorite, Ski Beatz-produced song "67 Cutlass" is set.
Stream the brand new single "Silverlake Rec League": https://ffm.to/murs316
SEE MURS ON TOUR! Buy Tickets for The Last Run Tour: https://www.murs316.com/event-list
Listen to the album on Spotify: https://open.spotify.com/album/6DUdy2eiuji8JlurLj8mgE
Listen to the album on Apple Music: https://music.apple.com/us/album/the-final-adventure/575079049
Support the podcast to get exclusive episodes and BRILA merch here: https://www.patreon.com/Murs316
Follow us on IG: https://www.instagram.com/brilapod/
Tune into Mondaze with Murs on Twitch: https://www.twitch.tv/3point5
Check out Murs "Daddytron" Video Mix: https://www.youtube.com/watch?v=L2edzhPUZ1M
Hosted on Acast. See acast.com/privacy for more information.

Brett’s Old Time Radio Show
Brett's Old Time Radio Show Episode 823, Dangerous Assignment, Kroner Cutlass

Brett’s Old Time Radio Show

Play Episode Listen Later Feb 5, 2025 32:14


Hello, I'm Brett and I'll be your host for these amazing Old Time Radio Shows :) Dangerous Assignment was a thrilling NBC radio drama that captivated audiences from 1949 to 1953, starring the dynamic Brian Donlevy as the fearless U.S. special agent Steve Mitchell. It was broadcast across a range of media, including a syndicated TV series in 1951–52, and even inspired a reimagined Australian radio version from 1954 to 1956. Both the radio and TV series kept viewers on the edge of their seats with fast-paced plots filled with espionage, deception, and international intrigue.

Series Premise: Each episode followed Steve Mitchell, an American agent dispatched by "The Commissioner," the enigmatic head of an unnamed U.S. State Department division. Steve's mission: to travel to exotic locations around the world to foil nefarious plots and uncover dangerous secrets. The show was designed to keep listeners in suspense, opening with a tantalizing scene before the action unfolded. Mitchell, posing as a suave foreign correspondent for an unspecified publication, navigated a maze of lies, betrayal, and violence—always emerging victorious by the end of the episode.

Origins and Evolution: Dangerous Assignment was originally conceived as a summer replacement series for NBC in 1949. It quickly gained popularity, and its success led to a full radio series running until 1953. Brian Donlevy, who also narrated the show, brought an intense realism to his portrayal of Steve Mitchell, which contributed to the show's gripping atmosphere. The only other consistent voice on the radio version was Herb Butterfield, who played "The Commissioner." Guest stars included famous actors like Raymond Burr, William Conrad, and Richard Boone, each lending their talents to create a unique cast of characters across the episodes. After the American radio series concluded, Dangerous Assignment continued its journey abroad with a 1954 Australian radio adaptation. This version used remade American scripts and introduced Lloyd Burrell as Steve Mitchell, broadcasting a total of 39 episodes.

The 1949 Summer Series: Dangerous Assignment first aired as a seven-week summer series in 1949, running on Saturdays from 8:30 to 9:00 PM EST. The character of Ruthie, the Commissioner's secretary, was played by Betty Moran, hinting at a possible romantic backstory with Steve Mitchell. The show's first episodes took listeners on adventures to locations like Messina, Sicily, Saigon, and Paris, where Steve investigated everything from stolen relief supplies to millionaire murder conspiracies.

The 1950–1953 Radio Run: The show's popularity ensured its return to the airwaves in February 1950, although it faced some scheduling challenges. Over the next few years, Dangerous Assignment moved through various time slots, ultimately running for over 160 episodes. The radio series also attracted major sponsors, including Ford Motor Company, Wheaties, and Anacin, though it was largely supported by NBC itself. The episodes became more formulaic, often starting with Steve Mitchell being assigned a mission—usually involving espionage, sabotage, or international political conflict—followed by thrilling encounters with dangerous enemies.

Syndicated Television Version (1951–1952): In 1951, Donlevy adapted the series into a syndicated television show. Rather than relying on a traditional TV network, Donlevy self-financed the production of 39 episodes, selling them individually to local stations across the country. This approach, aided by NBC's distribution assistance, allowed the show to reach a wide audience despite limited network support. Each episode remained faithful to the original radio scripts, with Donlevy reprising his role as Steve Mitchell and Herb Butterfield again playing "The Commissioner."

Production Team and Legacy: The television version of Dangerous Assignment employed a talented team behind the scenes, including assistant director William McGarry, production designer George Van Marter, and film editor Edward Schroeder, A.C.E. The show's episodes were often fast-paced, with each story revolving around Mitchell's covert operations in places as diverse as Paris, Berlin, and the African jungle. Among the famous guest stars featured in the TV series were Hugh Beaumont, Paul Frees, and Michael Ansara, who appeared as a variety of different characters throughout the series. Notable episodes included titles like "The Alien Smuggler Story" and "The Atomic Mine Story," where Steve Mitchell faced off against spies, criminals, and saboteurs in a constant battle to protect U.S. interests overseas.

The Man Behind the Character: Brian Donlevy, born in Cleveland, Ohio, on February 9, 1901, was known for his tough, no-nonsense persona, both on screen and on the airwaves. With a career that spanned film, radio, and television, Donlevy brought a unique depth to his portrayal of Steve Mitchell. He was a familiar face in 1940s Hollywood, starring in classic films like Beau Geste (1939) and Wake Island (1942), and even earned an Academy Award nomination for his role in Beau Geste. In addition to his success in film, Donlevy was a major figure in the development of Dangerous Assignment, both as the star and as a key producer for the television adaptation. His tough-guy image made him a natural fit for the role of the action-packed American agent, and he remained a popular figure in postwar television, contributing to numerous anthology series like Kraft Theatre and Lux Video Theatre.

Conclusion: Dangerous Assignment remains a notable chapter in both radio and television history. The series was a standout example of 1940s and 1950s action-adventure storytelling, blending espionage, drama, and international intrigue. Thanks to Brian Donlevy's magnetic performance, Dangerous Assignment continues to be remembered as a thrilling and influential series that helped set the stage for future espionage-themed shows and films.

Force Insensitive - A Star Wars Podcast
S5E30: Eggplant Cutlass Supreme

Force Insensitive - A Star Wars Podcast

Play Episode Listen Later Jan 2, 2025 103:14


Sometimes a show can be short on plot but still make a massive impact in the overall narrative tapestry. That was very much the case in Skeleton Crew S01E06: Zero Friends Again. There were some beautiful character beats that drew us closer than ever to the team as they near a showdown with the pirate crew on At Attin. We dig in and give you some insight into the characters' real-world parallels and discuss the return of a well-loved Star Wars contributor behind the scenes. Turn up your headphones, dial back your sensibilities, and join the wretched hive of scum and villainy as we take the low road to resistance on Season Five, Episode Thirty of Force Insensitive!
Send Email/Voicemail: mailto:forceinsensitive@gmail.com
Direct Voice Message: https://www.speakpipe.com/ForceInsensitive
Start your own podcast: https://www.buzzsprout.com/?referrer_id=386
Use our Amazon link: http://amzn.to/2CTdZzK
FB Group: https://www.facebook.com/groups/ForceInsensitive/
Twitter: http://twitter.com/ForceNSensitive
Facebook: http://facebook.com/ForceInsensitive
Instagram: http://instagram.com/ForceInsensitive

Hank Watson's Garage Hour podcast
09.23.24: Knee-Voltage of Your Own Competence - It's the Lack-of-Skills Special (Every Day's an IQ Test!), w/ The Carl on the Phone, Distracted Driving, Being Your 100&%, + Chainsaws, Kyuss, Marylin Manson & Cutlass Burnouts

Hank Watson's Garage Hour podcast

Play Episode Listen Later Dec 14, 2024 50:06


It's tired and we're late.  No, wait a minute...  Okay, that's right.  There's much afoot at the Circle J, and the Garage Hour goons have another insightful batch of geekbrain excellence for you: the other side of the coin of our skillset episode a few weeks back, thanks to a gal in the #3 lane who couldn't look up from her Distractomatic 5000 long enough to not drive into the bumper of the truck in front of her.  Don't be the anti-inspiration for our incompetence episode (and don't be a Carl).  There's also guidance on measuring up to 100% you, avoiding the shallow end of the tool pool, using your head to avoid the obvious, and trying to use your capabilities once in a while instead of sucking all the time. Moving beyond the need for being the best you you can do (coocoo k'choo), there's insight on more electric car fails by the OEs (begging the G for good money to keep spending on bad ones), wildebeests and crocodiles and educational television (back when it was worth a beer), and steering clear of incompetence multipliers (and bent pliers).

Hank Watson's Garage Hour podcast
09.23.24 (MP3): Knee-Voltage of Your Own Competence - It's the Lack-of-Skills Special (Every Day's an IQ Test!), w/ The Carl on the Phone, Distracted Driving, Being Your 100&%, + Chainsaws, Kyuss, Marylin Manson & Cutlass Burnouts

Hank Watson's Garage Hour podcast

Play Episode Listen Later Dec 14, 2024 50:06


It's tired and we're late.  No, wait a minute...  Okay, that's right.  There's much afoot at the Circle J, and the Garage Hour goons have another insightful batch of geekbrain excellence for you: the other side of the coin of our skillset episode a few weeks back, thanks to a gal in the #3 lane who couldn't look up from her Distractomatic 5000 long enough to not drive into the bumper of the truck in front of her.  Don't be the anti-inspiration for our incompetence episode (and don't be a Carl).  There's also guidance on measuring up to 100% you, avoiding the shallow end of the tool pool, using your head to avoid the obvious, and trying to use your capabilities once in a while instead of sucking all the time. Moving beyond the need for being the best you you can do (coocoo k'choo), there's insight on more electric car fails by the OEs (begging the G for good money to keep spending on bad ones), wildebeests and crocodiles and educational television (back when it was worth a beer), and steering clear of incompetence multipliers (and bent pliers).

KASIEBO IS NAKET
One Dead, Another Injured After Unknown Gunmen Inflict Cutlass Wounds on Victims in Tuntumba

KASIEBO IS NAKET

Play Episode Listen Later Oct 17, 2024 50:32


One person, Bushira Shaibu, believed to be in his 20s, has met an untimely death after unknown assailants attacked and killed him in Tuntumba, a farming community in the Bole district of the Savannah Region.

To All The Cars I’ve Loved Before
Tom - 1972 Oldsmobile Cutlass, 2005 Chevy Cavalier, 2015 Chevy Silverado + Life Lessons, Tragedy and Resilience, and Past Experiences Shaping the Present

To All The Cars I’ve Loved Before

Play Episode Listen Later Oct 1, 2024 29:04 Transcription Available


Click here to send a text to Christian and Doug
What happens when you trade in your car for a US aircraft carrier? Listen as our latest guest, Tom, tells us the answer to this and other questions, including why he prefers silver cars, how a car can be used as a diaper changing table in a pinch, and what it was like to help rebuild New Orleans after Hurricane Katrina. This episode takes a poignant turn as we explore stories of loss, resilience, and redemption. From the devastating loss of his childhood home and his first car to a fire, to navigating life without a car during his Navy years, Tom emphasizes how he learned the importance of living in the present and letting go of material attachments. We also delve into a transformative journey through struggles with addiction, homelessness, and the aftermath of Hurricane Katrina, highlighting personal growth and unexpected opportunities. Tune in for a deep, emotional conversation that reveals how our past experiences shape who we are today and offers a unique perspective on rebuilding and resilience.

Marooned
The Marooning of Philip Ashton

Marooned

Play Episode Listen Later Sep 18, 2024 38:54


Maroon - to leave someone trapped and isolated in an inaccessible place, especially an island. Admittedly, stories that closely fit that definition are more difficult to come by than we originally thought. This episode, however, fits the bill.   Resources: Book ~ Fleming, Gregory N. At the Point of a Cutlass. 2014   Hey! Does anybody ever read the show notes? If so, please rate and review Marooned so that Jack & Aaron aren't lost at sea. Thank you.    

The Board Game BBQ Podcast
Episode 310: Magic: The Gathering Bloomburrow, Umbrella, Dungeons & Dragons

The Board Game BBQ Podcast

Play Episode Listen Later Sep 10, 2024 83:03


In this week's episode, Conor, Jules and Joe share what has been hitting their tables. Jules gives us a blast from the past, revisiting his humble beginnings with Magic: The Gathering. Joe has been enjoying his latest cup-of-tea rainy-day game in Umbrella, though some of the strategy went over his head. Conor takes the best of both, stirring up storms with an even bigger throwback to the iconic RPG, Dungeons & Dragons. There's also the Question of the Pod, what has us fired up lately, and all the regular podcast shenanigans.
New Question of the Pod: What are your thoughts on the popularity of RPGs compared to board games?
Timestamps: Magic: The Gathering Bloomburrow [34:40], Umbrella [46:58], Dungeons & Dragons [0:52:20], Question of the Pod Recap [1:08:12], What has us fired up? [1:15:26]
Check out our Eventbrite page for all of our upcoming Game Days: https://www.eventbrite.com.au/o/board-game-bbq-32833304483
Has this episode left you with a thirst for more? Here are all of the games that we discussed:
Art Society - https://boardgamegeek.com/boardgame/395375/art-society
Linx - https://boardgamegeek.com/boardgame/399757/linx
Cutlass - https://boardgamegeek.com/boardgame/232588/cutlass
Unsettled - https://boardgamegeek.com/boardgame/290484/unsettled
Rebel Princess - https://boardgamegeek.com/boardgame/381249/rebel-princess
Magic: The Gathering Bloomburrow - https://boardgamegeek.com/boardgame/425700/magic-the-gathering-bloomburrow
Umbrella - https://boardgamegeek.com/boardgame/402125/umbrella
SPONSORS: Our podcast is proudly sponsored by Advent Games. Advent Games (http://www.adventgames.com.au/) are an Australian online board game store based in Sydney, NSW. Their core values are integrity, customer satisfaction, and providing a wide range of products including those hard-to-find board games.
PATREON: Hey there, BBQ fans! Guess what? We've got a Patreon! By joining, you'll unlock exclusive content, gain access to a members-only section of our Discord where you can help shape the show, and so much more. Plus, your support will help us grow and bring some awesome new projects to life in 2024. At the Board Game BBQ Podcast, we're passionate about what we do and promise to keep the fun and shenanigans rolling. We're so grateful for your support! Joining our Patreon is totally optional, and we ask that you don't contribute if it'll cause financial stress. But if you'd like to chip in from just USD$5 a month, click the link to check out our Patreon page. Thanks a million for being amazing! We're committed to creating a welcoming and inclusive community, and you all make it special. See you at the BBQ!! https://www.patreon.com/BoardGameBBQ
SOCIALS: Support the podcast and join the community! https://linktr.ee/BoardGameBBQ

High Octane Hustle
E39 Georjah Erin, Lowrider Cutlass, Mini Truck Build, Modeling and Hoonigans

High Octane Hustle

Play Episode Listen Later Sep 2, 2024 61:09


Hosted by Fast Lane Jane Thurmond and Design Muse Theresa Contreras. Georjah joins us to talk about her journey from lowriding in Canada to modeling at shows and magazines all across the US. Soon she brought out a beauty of her own with her lowrider Cutlass. Now she's on adventures with the Hoonigans and Hoppo's to build a lowrider-custom-inspired Toyota mini truck for SEMA. Check out Georjah's Toyota mini truck build @georjah. Follow Georjah on Instagram @georjah_erin, on TikTok @georjah_ltd and on www.georjah.com. Download the Tinker DIY app to get one-on-one help from vetted professionals and ASE certified mechanics to help with your automotive work. www.tinkerdiy.com Apple App: https://tinkerdiy.onelink.me/pA75/websitebody Google App: https://tinkerdiy.onelink.me/pA75/websitebody
Produced by Auto Revolution. Auto Revolution produces automotive TV Shows, Podcasts, Promotional Videos, and more. Watch at www.autorevolution.tv and follow @autorevolution
Recorded at Autotopia LA, the premier automobile storage & concierge facility in Los Angeles. Collector car storage. Vintage car storage. Luxury car storage. Exotic car storage. Follow Autotopia LA @autotopiala and www.autotopiala.com
Baja Forged - Timeless design. Race inspired. BAJA proven! We love looking good driving on and off road. Baja Forged offers products to be capable when we need them. So we built Baja Forged. Follow Baja Forged at @bajaforged and www.bajaforged.com
GTS Customs - Corvette specialists, GTS Customs sets the highest standards for custom fab & body work, outrageous paint, complete builds and restomods. Follow GTS Customs at @gtscustoms and www.gtscustoms.com
#podcast #carpodcast

High Octane Hustle
E39 Georjah Erin, Lowrider Cutlass, Mini Truck Build, Modeling and Hoonigans

High Octane Hustle

Play Episode Listen Later Sep 2, 2024 61:08


Hosted by Fast Lane Jane Thurmond and Design Muse Theresa Contreras. Georjah joins us to talk about her journey from lowriding in Canada to modeling at shows and magazines all across the US. Soon she brought out a beauty of her own with her lowrider Cutlass. Now she's on adventures with the Hoonigans and Hoppo's to build a lowrider-custom-inspired Toyota mini truck for SEMA. Check out Georjah's Toyota mini truck build @georjah. Follow Georjah on Instagram @georjah_erin, on TikTok @georjah_ltd and on www.georjah.com. Download the Tinker DIY app to get one-on-one help from vetted professionals and ASE certified mechanics to help with your automotive work. www.tinkerdiy.com Apple App: https://tinkerdiy.onelink.me/pA75/websitebody Google App: https://tinkerdiy.onelink.me/pA75/websitebody
Produced by Auto Revolution. Auto Revolution produces automotive TV Shows, Podcasts, Promotional Videos, and more. Watch at www.autorevolution.tv and follow @autorevolution
Recorded at Autotopia LA, the premier automobile storage & concierge facility in Los Angeles. Collector car storage. Vintage car storage. Luxury car storage. Exotic car storage. Follow Autotopia LA @autotopiala and www.autotopiala.com
Baja Forged - Timeless design. Race inspired. BAJA proven! We love looking good driving on and off road. Baja Forged offers products to be capable when we need them. So we built Baja Forged. Follow Baja Forged at @bajaforged and www.bajaforged.com
GTS Customs - Corvette specialists, GTS Customs sets the highest standards for custom fab & body work, outrageous paint, complete builds and restomods. Follow GTS Customs at @gtscustoms and www.gtscustoms.com
#podcast #carpodcast

Jemjammer
Side Story 3 - "Dance With Me"

Jemjammer

Play Episode Listen Later Sep 1, 2024 63:52


Hi everyone! We've had a bit of a scheduling conflict and need to take a skip week, so instead we're releasing the final Jemjammer flashback episode early. Enjoy! As our heroes' reunion with Max grows imminent, Anna and Annie present one final flashback scene from before His Highness left. As the Kestrel approaches Providence Bay to return Blackjammer's Cutlass, Jylliana learns some new moves, and Max changes his mind. Get Jylliana's Logs, Kit's homebrew content, and general shitposts on our Patreon. Hosted on Acast. See acast.com/privacy for more information.

The Die As Cast
Ep. #47 -A Lotta' Big Words (The Expanding Waste)

The Die As Cast

Play Episode Listen Later Jul 29, 2024 72:43


Welcome, friends and adventurers, to The Die As Cast Podcast! Our Dungeon Master, Kevin Cork, takes our party through Kobold Press' post-apocalyptic Wasted West in the world of Midgard. Join in the tale of Gideon Sweets (Griffin Cork), Maeve Maldorava (Madeline Hunter Smith), Ilexyldean (Emma Brager) and Xisk (Diego Stredel). In this episode, our party asks a LOT of clarifying questions. Xisk asks for a protection, Gideon asks for answers, Maeve asks for family, and Ilex asks for tiramisu.
Join the Die As Cast Community!
FACEBOOK: https://www.facebook.com/thedieascast
TWITTER: https://twitter.com/TheDieAsCast
WEBSITE: https://www.dieascast.com/
JOIN OUR DISCORD: https://discord.gg/K59Bke958R
KOBOLD PRESS: https://koboldpress.com/
MAPS AND MELODIES (AKA THE BOY KING OF IDAHO): https://www.patreon.com/mapsandmelodies
M&M Songs Used in this Episode: The Astral Express, Sunrise, Desert Mage City
Additional Editing by DM-8 Multimedia Post-Production
FOLLOW GRIFFIN ON SOCIAL MEDIA:
TWITTER: https://twitter.com/GriffinCork
INSTAGRAM: www.instagram.com/griffincork/
FOLLOW DIEGO ON SOCIAL MEDIA:
TWITTER: https://twitter.com/diegostredel
INSTAGRAM: https://www.instagram.com/diegostredel/
FOLLOW MADELINE ON SOCIAL MEDIA:
TWITTER: https://twitter.com/madelinehsmith
INSTAGRAM: https://www.instagram.com/madelinehuntersmith/
FOLLOW EMMA ON SOCIAL MEDIA:
INSTAGRAM: https://www.instagram.com/cinderemmab/
FOLLOW KEVIN ON SOCIAL MEDIA:
TWITTER: https://twitter.com/kevincork
INSTAGRAM: https://www.instagram.com/kevincork/
MASTODON: https://mastodon.social/@fundpirate@toot.community

WNC Original Music
Ep 184 Cutlass pt 2

WNC Original Music

Play Episode Listen Later Jul 26, 2024 40:11


Cutlass returns, mainly because they used that old trick of "forgetting" their scarf at my house. Real smooth, guys.    Listen and follow Cutlass at these places https://www.instagram.com/cutlassmusic/?hl=en https://open.spotify.com/artist/1Pf6RzYh2dCV4iK9mql7bo https://www.tiktok.com/@cutlassmusic?lang=en   Find Landon Gray here  https://open.spotify.com/album/06PFTY4QYTCLDr7ejxfjen?si=Led542LHRyiIqK8S2dj1yw  https://music.apple.com/us/album/honeymoon-eyes-feat-zenna-bryan/1729484850?i=1729484852       Subscribe to the podcast - https://podcasts.apple.com/us/podcast/wnc-original-music/id1378776313 https://www.iheart.com/podcast/wnc-original-music-31067964/ This link has all the other places to subscribe https://gopod.me/wncom   Follow on Social Media https://www.facebook.com/wncoriginalmusic https://www.wncoriginalmusic.com https://www.instagram.com/wnc_original_music/   All music used by permission   Cutlass is a dynamic alternative rock duo that draws heavy influence from the music of the late 60s to early 90s. Notable influences include sam cooke, jimi hendrix, the beatles, drake, cake, beethoven, soundgarden, prince, rihanna and the human condition.  Their 4th studio album "The East" will be released Winter 2022. Essential recordings include "Blue Bloods" "Used to" and "Slave". Cutlass can be found on all streaming platforms via the name Cutlass Music. 

The Reckon Yard Podcast
Episode 4 1987 Oldsmobile Cutlass Ciera

The Reckon Yard Podcast

Play Episode Listen Later Jul 15, 2024 40:38


Life in the big city is a big change for the Longmire family, thankfully America's favorite cheap car has their back.

Total Offroad Podcast
EP. 220 128 + 15 Bearings

Total Offroad Podcast

Play Episode Listen Later Jun 20, 2024 84:32


Today's engine start is from Joseph Giffin! Want to see what his rig looks like? Check it out here! https://www.instagram.com/monkey_foot_outdoors?igsh=dXVqMm5mdjZlM3Y3 Another BS episode. Mike took some time off to fix his daily. Derek is still working through wheeling rig maintenance. And Steve is getting his 442 Cutlass running after 6 years. Thanks for listening!  More TOP Here! https://www.facebook.com/groups/679759029530199  https://www.patreon.com/Totaloffroadpodcast  https://www.youtube.com/@totaloffroadpodcast4296   Affiliate Companies we know You'll love! http://www.radesignsproducts.com/  https://www.facebook.com/profile.php?id=100091584686528  https://www.offroadanonymous.com/ https://crawleroffroad.com/ https://morrflate.com/ https://completeoffroad.com/ https://www.summershinesupply.com/ https://toolboxwidget.com/ https://coldspringcustoms.com/pages/radiopod   Follow Your Hosts! www.instagram.com/total_offroad_podcast www.instagram.com/low_kee_xj www.instagram.com/Dmanbluesfreak www.instagram.com/mikesofunny https://www.instagram.com/mr.mengo.xj/   All Caught Up with TOP? Go give these guys a listen!  https://open.spotify.com/show/1Pvslx6FEQJdTurCXOckBL?si=b2cacbe3d7d44f22 https://www.snailtrail4x4.com/snail-trail-4x4-podcast/

Deconstructing Comics
Critiquing Comics #236: “Clover and Cutlass” and “Coiled to Strike”

Deconstructing Comics

Play Episode Listen Later May 29, 2024 43:15


Clover and Cutlass is a Dungeons and Dragons-inspired fantasy YA comedy web comic by Toby Boyd. Adam joins Tim to discuss. Coiled to Strike is an anthology book from Wildstar Press, featuring numerous artists and writers, focused on the adventures of legendary wild west antihero Emory Graves. Jason joins Tim to critique. Brought to you … Continue reading Critiquing Comics #236: “Clover and Cutlass” and “Coiled to Strike”

In Wheel Time - Cartalk Radio
Merging Power and Design in Custom Cars with Randy Borcherding of Painthouse

In Wheel Time - Cartalk Radio

Play Episode Listen Later Apr 16, 2024 30:45


Rev up your engines and buckle up for an adventure with Randy Borcherding of Painthouse Texas, where we uncover the heart and soul poured into custom car projects. From a Chevy that boasts a hemi-headed big block engine to a revamped 1955 Chevy with a modern twist, you'll get an exclusive tour of what it takes to marry roaring power with sleek design. We don't just talk shop; we dive into the sweet symphony of exhaust notes and balance the cutting-edge with a touch of nostalgia, as we join Jeff in a sentimental journey through the evolution of model car building that many of us hold dear. As we shift gears, the conversation turns to the fine-tuning of chassis dyno tuning, where you'll grasp the intricacies of making a classic ride not just shine, but hum perfectly on any road. Discover the local gems for Corvette tuning and explore the challenges faced by DIY enthusiasts and the value of professional skill. We also spotlight the '68 Cutlass restoration with its LS3 heart and Oldsmobile disguise, and tackle the industry's urgent call for craftspeople in car upholstery. With personal tales and a shared passion for the automotive craft, this episode is a must-listen for those who appreciate the art behind the engine's roar.
The Original Lupe' Tortilla Restaurants - Lupe Tortilla in Katy, Texas
Sponsored by Gulf Coast Auto Shield - Paint protection and more!
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Want more In Wheel Time Car Talk any time? In Wheel Time Car Talk is now available on iHeartRadio! Just go to iheartradio.com/InWheelTimeCarTalk wherever you are.
Be sure to subscribe on your favorite podcast provider for the next episode of In Wheel Time Car Talk and check out our live broadcast every Saturday, 8a-11aCT simulcasting on iHeartRadio, YouTube, Facebook, Twitter, Twitch and InWheelTime.com.
In Wheel Time Car Talk podcast can be heard on your mobile device from providers such as: Apple Podcasts, Pandora Podcast, Amazon Music Podcast, Spotify, Google Podcasts, iHeartRadio podcast, TuneIn + Alexa, Podcast Addict, Castro, Castbox and more.
Follow InWheelTime.com for the latest updates!
Twitter: https://twitter.com/InWheelTime
Instagram: https://www.instagram.com/inwheeltime/
https://www.iheart.com/live/in-wheel-time-car-talk-9327/
https://www.youtube.com/inwheeltime
https://www.Facebook.com/InWheelTime
For more information about In Wheel Time Car Talk, email us at info@inwheeltime.com
Tags: In Wheel Time, automotive car talk show, car talk, Live car talk show, In Wheel Time Car Talk

In the Field Radio
Flashback: In the Field with Haiti Babii

In the Field Radio

Play Episode Listen Later Apr 10, 2024 18:44


Air Date: June 2020 on 91.3FM WVKR - Another pandemic-era throwback: Erin Boogie sits down with Stockton, California rapper Haiti Babii via Zoom and discusses that freestyle on Bootleg Kev and DJ Hed's radio show. The bizarre freestyle caught the attention of many, and the viral moment led to co-signs from Rihanna and Meek Mill. Haiti Babii talks about dropping the visuals for "Cutlass" on Juneteenth, wearing Black designers, and opens up about his impending fatherhood. It's another can't-miss interview!
Buzzsprout - Let's get your podcast launched! Start for FREE
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Support the show
Shop Our Website!

Track Walking
159 - Everything of Value (Cody Smith)

Track Walking

Play Episode Listen Later Apr 1, 2024 81:32


Seth is glad there are people like Cody, Scott wonders why do it the hard way, and Cody has an emotional connection to his cars... and they all recall the smell of 80's cars. Cody Smith joins us to talk about losing everything, racing a Cutlass, and snowmobiles. Cody's Youtube Cody's Instagram ------ ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠Robertson-Racing.com⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠Track Walking Chats - Group⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠Track Walking - Facebook⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠ ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠Track Walking - Instagram⁠

KASIEBO IS NAKET
Two Guys Inflict Cutlass Wounds On Fishmonger Over 6 Cedis

KASIEBO IS NAKET

Play Episode Listen Later Mar 21, 2024 58:07


Akyem Ofoase District Police Command is on a manhunt for two men for allegedly inflicting cutlass wounds on a fishmonger over GHC6 debt

Hank Watson's Garage Hour podcast
02.20.24: Mercedes, Ford & Volvo Jump Electric Ship, Airlines Can't Fly DIE & Dealerships Can't Sell Nonsense, Fob Hopping & Camaro Stealing VS Anti-Theft Devices (Car Keys), Coffee VS Bourbon, iMac Wins VS iCar Flops, Custom Reloads, Gearhe

Hank Watson's Garage Hour podcast

Play Episode Listen Later Mar 8, 2024 65:02


While the cohosts are away building bumpers, bullets and blackout tint, Hostus Maximus will play (with Tomahawk, all show long).  Gasoline and caffeine.  7mm-08 and .308.  Bourbon and coffee.  iMacs, iPhones and iCars (win, win and lose).  Automakers losing their electric religion as shoppers lose the fad.  Integra and Cutlass.  Woke fails at 30,000ft and corporates bail on ground-level politics. Meanwhile, we've got some stand-in cohosts:  Dave Mustaine, Thomas Sowell, Layne Staley, Mike Patton, Rob Zombie, plus CHUDS, troglodites, morlochs and the lung brush.

Hank Watson's Garage Hour podcast
02.20.24(MP3): Mercedes, Ford & Volvo Jump Electric Ship, Airlines Can't Fly DIE & Dealerships Can't Sell Nonsense, Fob Hopping & Camaro Stealing VS Anti-Theft Devices (Car Keys), Coffee VS Bourbon, iMac Win VS iCar Flop, Custom Reloads, Gea

Hank Watson's Garage Hour podcast

Play Episode Listen Later Mar 8, 2024 65:02


While the cohosts are away building bumpers, bullets and blackout tint, Hostus Maximus will play (with Tomahawk, all show long).  Gasoline and caffeine.  7mm-08 and .308.  Bourbon and coffee.  iMacs, iPhones and iCars (win, win and lose).  Automakers losing their electric religion as shoppers lose the fad.  Integra and Cutlass.  Woke fails at 30,000ft and corporates bail on ground-level politics. Meanwhile, we've got some stand-in cohosts:  Dave Mustaine, Thomas Sowell, Layne Staley, Mike Patton, Rob Zombie, plus CHUDS, troglodites, morlochs and the lung brush.

V8 Radio
Cruise into Nostalgia with the Latest V8 Radio Podcast: Unveiling a Cutlass Supreme & More!

V8 Radio

Play Episode Listen Later Feb 6, 2024 75:47


Tune in for a thrilling ride down memory lane with the latest episode of the V8 Radio Podcast, hosted by the dynamic duo of Kevin Oeste and Mike "Q-Ball" Clarke. This episode is packed with exciting content that will pique your interest, whether you're a seasoned restoration pro or a casual admirer of automotive history. Here's a sneak peek at what awaits you: Kevin's Creative Hub Revealed: Dive into a tour of Kevin's brand new creative haven, a space brimming with inspiration and sure to spark your curiosity. What innovative projects are brewing within its walls? Tune in to find out! Preserving the Past: Resurrecting Old Recordings and Videos: As the V8 Speed and Resto Shop and V8TV gear up for their 20th anniversary, the conversation delves into the fascinating process of rediscovering and digitizing old recordings and videos. This is a treasure trove of automotive history waiting to be unearthed! The Unveiling of a Beauty: 1971 Oldsmobile Cutlass Supreme Convertible: Kevin and Q-Ball touch on the stunningly restored 1971 Oldsmobile Cutlass Supreme convertible, a long-time family heirloom car brought back to life by the skilled hands at V8 Speed and Resto. Fueling Your Automotive Knowledge: Automotive Trivia Challenge: Put your car knowledge to the test with a fun and engaging automotive trivia challenge hosted by the dynamic duo. And Much More! This episode is just a taste of the exciting content packed into this latest V8 Radio Podcast. Buckle up for laughter, insightful discussions, and a healthy dose of automotive passion. Don't miss out on this thrilling ride! Subscribe to the V8 Radio Podcast today and be the first to experience the latest automotive adventures with Kevin Oeste and Mike "Q-Ball" Clarke. P.S. Share your thoughts on the episode in the comments below! What aspect are you most excited about? What car-related stories would you like to hear in the future? Let's keep the conversation rolling! Ed Tillrock Art: https://edtillrockpencilspecialist.bigcartel.com/category/large-painting-prints-giclee-print-selections

Mysteries of The Ohio Valley
The Maroon Cutlass - Debra Capiola

Mysteries of The Ohio Valley

Play Episode Listen Later Jan 10, 2024 16:58


On St. Patrick's Day of 1977, Debra Capiola walked to school by herself. She never made it to the bus stop. What happened on that day? Could her killer be behind more than just this murder?
https://www.reddit.com/r/UnresolvedMysteries/comments/q5nkxc/was_there_a_serial_killer_operating_in_western_pa/
https://archive.triblive.com/news/cecil-man-gets-life-for-1977-slaying/
https://www.newspapers.com/article/pittsburgh-post-gazette-wash-co-murders/51099361/
https://inmatelocator.cor.pa.gov/#/Result

The Rollback Show
BNoiseTV

The Rollback Show

Play Episode Play 60 sec Highlight Listen Later Jan 8, 2024 96:21 Transcription Available


Remember the days when shifting gears on the open road was the only therapy you needed? Our latest episode transports you back to that era, while cruising through the ever-changing landscape of car culture in Chicago. From the tight-knit camaraderie of events like Chicago vs. Everybody, to navigating the tricky dance with law enforcement, we cover the full spectrum of what it means to be part of this pulsating community. We open up about our personal journeys with classic cars, like the '72 Cutlass, and the deep connections we've forged with these mechanical marvels. As night falls and the White Castle parking lot lights up, we swap stories of the vibrant late-night scene, balancing our passion for cars with the dynamics of relationships. We shine a light on the importance of safety, respect, and the need to protect the integrity of these gatherings from those who would disrespect them. The episode also takes a turn towards the more serious issues of violence and conflict resolution among the youth, emphasizing the power of community and strength of character over the persuasion of carrying a weapon. Join us along with the crew from BNoise TV as we recall the skillful showmanship of legendary drivers like the 350Z master and share laughs over tire-changing escapades gone awry. We pay homage to influential figures like Smurf, whose legacy continues to shape the scene, and discuss the dreams of creating a safe, legal space for enthusiasts to thrive. So, buckle up, because this episode is an epic ride through the heart of car culture, capturing the essence of why we love this community so fiercely.
Support the show
Follow our instagram for more updates http://instagram.com/therollbackshow

Hank Watson's Garage Hour podcast
12.10.23: Gifts for Gearheads (w/ Guns & Sips & Tools & Fancy Waxy Polish), + Brother James' Creekwater Whiskey, .40 S&W Wrecking Guns, Two Degrees from a Cutlass, 719 Traffic VS 24 Hours of LeMans, & A Family Brawl at the Broadmoor

Hank Watson's Garage Hour podcast

Play Episode Listen Later Dec 19, 2023 65:16


it's Christmastime, so why not grab the punching handle by the blondes and try the Home Depot China Challenge yourself?  …Because the gearheads in your life like what we like, we have ideas:  fine waxes (Zymol), special sips (Gilark Creekwater rye whiskey, Axe & Oak, and 291), good tools (used or new, it's about the function), obscure watches (make the effort), tint and clear-bra from E.A.S., slot cars, Atari and N64, and the gift of babysitting (so folks can make some time).  Most importantly, shop local and seek something that's better than expected. While we're at it:  How about a fistfight over chablis at the Broadmoor?  How about Subie politics and 4Runner douchebaggery on the Front Range?  Who isn't two degrees from a Cutlass?  …And why do all the racoons have knives?

Hank Watson's Garage Hour podcast
12.10.23 (MP3): Gifts for Gearheads (w/ Guns & Sips & Tools & Fancy Waxy Polish), + Brother James' Creekwater Whiskey, .40 S&W Wrecking Guns, Two Degrees from a Cutlass, 719 Traffic VS 24 Hours of LeMans, & A Family Brawl at the Broad

Hank Watson's Garage Hour podcast

Play Episode Listen Later Dec 19, 2023 65:16


it's Christmastime, so why not grab the punching handle by the blondes and try the Home Depot China Challenge yourself?  …Because the gearheads in your life like what we like, we have ideas:  fine waxes (Zymol), special sips (Gilark Creekwater rye whiskey, Axe & Oak, and 291), good tools (used or new, it's about the function), obscure watches (make the effort), tint and clear-bra from E.A.S., slot cars, Atari and N64, and the gift of babysitting (so folks can make some time).  Most importantly, shop local and seek something that's better than expected. While we're at it:  How about a fistfight over chablis at the Broadmoor?  How about Subie politics and 4Runner douchebaggery on the Front Range?  Who isn't two degrees from a Cutlass?  …And why do all the racoons have knives?

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Want to help define the AI Engineer stack? Have opinions on the top tools, communities and builders? We're collaborating with friends at Amplify to launch the first State of AI Engineering survey! Please fill it out (and tell your friends)!If AI is so important, why is its software so bad?This was the motivating question for Chris Lattner as he reconnected with his product counterpart on Tensorflow, Tim Davis, and started working on a modular solution to the problem of sprawling, monolithic, fragmented platforms in AI development. They announced a $30m seed in 2022 and, following their successful double launch of Modular/Mojo

#WithChude
Josephina 'Phyna' Otabor sits #WithChude: Depression, cutlass wounds and ‘every woman has had an abortion'

#WithChude

Play Episode Listen Later Sep 6, 2023 8:25


Exclusive Patron-only Content. Hosted on Acast. See acast.com/privacy for more information.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
LLMs Everywhere: Running 70B models in browsers and iPhones using MLC — with Tianqi Chen of CMU / OctoML

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Aug 10, 2023 52:10


We have just announced our first set of speakers at AI Engineer Summit! Sign up for the livestream or email sponsors@ai.engineer if you'd like to support.We are facing a massive GPU crunch. As both startups and VC's hoard Nvidia GPUs like countries count nuclear stockpiles, tweets about GPU shortages have become increasingly common. But what if we could run LLMs with AMD cards, or without a GPU at all? There's just one weird trick: compilation. And there's one person uniquely qualified to do it.We had the pleasure to sit down with Tianqi Chen, who's an Assistant Professor at CMU, where he both teaches the MLC course and runs the MLC group. You might also know him as the creator of XGBoost, Apache TVM, and MXNet, as well as the co-founder of OctoML. The MLC (short for Machine Learning Compilation) group has released a lot of interesting projects:* MLC Chat: an iPhone app that lets you run models like RedPajama-3B and Vicuna-7B on-device. It gets up to 30 tok/s!* Web LLM: Run models like LLaMA-70B in your browser (!!) to offer local inference in your product.* MLC LLM: a framework that allows any language models to be deployed natively on different hardware and software stacks.The MLC group has just announced new support for AMD cards; we previously talked about the shortcomings of ROCm, but using MLC you can get performance very close to the NVIDIA's counterparts. This is great news for founders and builders, as AMD cards are more readily available. Here are their latest results on AMD's 7900s vs some of top NVIDIA consumer cards.If you just can't get a GPU at all, MLC LLM also supports ARM and x86 CPU architectures as targets by leveraging LLVM. While speed performance isn't comparable, it allows for non-time-sensitive inference to be run on commodity hardware.We also enjoyed getting a peek into TQ's process, which involves a lot of sketching:With all the other work going on in this space with projects like ggml and Ollama, we're excited to see GPUs becoming less and less of an issue to get models in the hands of more people, and innovative software solutions to hardware problems!Show Notes* TQ's Projects:* XGBoost* Apache TVM* MXNet* MLC* OctoML* CMU Catalyst* ONNX* GGML* Mojo* WebLLM* RWKV* HiPPO* Tri Dao's Episode* George Hotz EpisodePeople:* Carlos Guestrin* Albert GuTimestamps* [00:00:00] Intros* [00:03:41] The creation of XGBoost and its surprising popularity* [00:06:01] Comparing tree-based models vs deep learning* [00:10:33] Overview of TVM and how it works with ONNX* [00:17:18] MLC deep dive* [00:28:10] Using int4 quantization for inference of language models* [00:30:32] Comparison of MLC to other model optimization projects* [00:35:02] Running large language models in the browser with WebLLM* [00:37:47] Integrating browser models into applications* [00:41:15] OctoAI and self-optimizing compute* [00:45:45] Lightning RoundTranscriptAlessio: Hey everyone, welcome to the Latent Space podcast. This is Alessio, Partner and CTO in Residence at Decibel Partners, and I'm joined by my co-host Swyx, writer and editor of Latent Space. [00:00:20]Swyx: Okay, and we are here with Tianqi Chen, or TQ as people call him, who is assistant professor in ML computer science at CMU, Carnegie Mellon University, also helping to run Catalyst Group, also chief technologist of OctoML. You wear many hats. Are those, you know, your primary identities these days? Of course, of course. [00:00:42]Tianqi: I'm also, you know, very enthusiastic open source. 
So I'm also a VP and PRC member of the Apache TVM project and so on. But yeah, these are the things I've been up to so far. [00:00:53]Swyx: Yeah. So you did Apache TVM, XGBoost, and MXNet, and we can cover any of those in any amount of detail. But maybe what's one thing about you that people might not learn from your official bio or LinkedIn, you know, on the personal side? [00:01:08]Tianqi: Let me say, yeah, so normally when I do, I really love coding, even though like I'm trying to run all those things. So one thing that I keep a habit on is I try to do sketchbooks. I have a book, like real sketchbooks to draw down the design diagrams and the sketchbooks I keep sketching over the years, and now I have like three or four of them. And it's kind of a usually a fun experience of thinking the design through and also seeing how open source project evolves and also looking back at the sketches that we had in the past to say, you know, all these ideas really turn into code nowadays. [00:01:43]Alessio: How many sketchbooks did you get through to build all this stuff? I mean, if one person alone built one of those projects, he'll be a very accomplished engineer. Like you built like three of these. What's that process like for you? Like it's the sketchbook, like the start, and then you think about the code or like. [00:01:59]Swyx: Yeah. [00:02:00]Tianqi: So, so usually I start sketching on high level architectures and also in a project that works for over years, we also start to think about, you know, new directions, like of course generative AI language model comes in, how it's going to evolve. So normally I would say it takes like one book a year, roughly at that rate. It's usually fun to, I find it's much easier to sketch things out and then gives a more like a high level architectural guide for some of the future items. Yeah. [00:02:28]Swyx: Have you ever published this sketchbooks? Cause I think people would be very interested on, at least on a historical basis. Like this is the time where XGBoost was born, you know? Yeah, not really. [00:02:37]Tianqi: I started sketching like after XGBoost. So that's a kind of missing piece, but a lot of design details in TVM are actually part of the books that I try to keep a record of. [00:02:48]Swyx: Yeah, we'll try to publish them and publish something in the journals. Maybe you can grab a little snapshot for visual aid. Sounds good. [00:02:57]Alessio: Yeah. And yeah, talking about XGBoost, so a lot of people in the audience might know it's a gradient boosting library, probably the most popular out there. And it became super popular because many people started using them in like a machine learning competitions. And I think there's like a whole Wikipedia page of like all state-of-the-art models. They use XGBoost and like, it's a really long list. When you were working on it, so we just had Tri Dao, who's the creator of FlashAttention on the podcast. And I asked him this question, it's like, when you were building FlashAttention, did you know that like almost any transform race model will use it? And so I asked the same question to you when you were coming up with XGBoost, like, could you predict it would be so popular or like, what was the creation process? And when you published it, what did you expect? We have no idea. [00:03:41]Tianqi: Like, actually, the original reason that we built that library is that at that time, deep learning just came out. Like that was the time where AlexNet just came out. 
And one of the ambitious mission that myself and my advisor, Carlos Guestrin, then is we want to think about, you know, try to test the hypothesis. Can we find alternatives to deep learning models? Because then, you know, there are other alternatives like, you know, support vector machines, linear models, and of course, tree-based models. And our question was, if you build those models and feed them with big enough data, because usually like one of the key characteristics of deep learning is that it's taking a lot [00:04:22]Swyx: of data, right? [00:04:23]Tianqi: So we will be able to get the same amount of performance. That's a hypothesis we're setting out to test. Of course, if you look at now, right, that's a wrong hypothesis, but as a byproduct, what we find out is that, you know, most of the gradient boosting library out there is not efficient enough for us to test that hypothesis. So I happen to have quite a bit of experience in the past of building gradient boosting trees and their variants. So Effective Action Boost was kind of like a byproduct of that hypothesis testing. At that time, I'm also competing a bit in data science challenges, like I worked on KDDCup and then Kaggle kind of become bigger, right? So I kind of think maybe it's becoming useful to others. One of my friends convinced me to try to do a Python binding of it. That tends to be like a very good decision, right, to be effective. Usually when I build it, we feel like maybe a command line interface is okay. And now we have a Python binding, we have R bindings. And then it realized, you know, it started getting interesting. People started contributing different perspectives, like visualization and so on. So we started to push a bit more on to building distributive support to make sure it works on any platform and so on. And even at that time point, when I talked to Carlos, my advisor, later, he said he never anticipated that we'll get to that level of success. And actually, why I pushed for gradient boosting trees, interestingly, at that time, he also disagreed. He thinks that maybe we should go for kernel machines then. And it turns out, you know, actually, we are both wrong in some sense, and Deep Neural Network was the king in the hill. But at least the gradient boosting direction got into something fruitful. [00:06:01]Swyx: Interesting. [00:06:02]Alessio: I'm always curious when it comes to these improvements, like, what's the design process in terms of like coming up with it? And how much of it is a collaborative with like other people that you're working with versus like trying to be, you know, obviously, in academia, it's like very paper-driven kind of research driven. [00:06:19]Tianqi: I would say the extra boost improvement at that time point was more on like, you know, I'm trying to figure out, right. But it's combining lessons. Before that, I did work on some of the other libraries on matrix factorization. That was like my first open source experience. Nobody knew about it, because you'll find, likely, if you go and try to search for the package SVD feature, you'll find some SVN repo somewhere. But it's actually being used for some of the recommender system packages. So I'm trying to apply some of the previous lessons there and trying to combine them. The later projects like MXNet and then TVM is much, much more collaborative in a sense that... But, of course, extra boost has become bigger, right? So when we started that project myself, and then we have, it's really amazing to see people come in. 
Michael, who was a lawyer, and now he works on the AI space as well, on contributing visualizations. Now we have people from our community contributing different things. So extra boost even today, right, it's a community of committers driving the project. So it's definitely something collaborative and moving forward on getting some of the things continuously improved for our community. [00:07:37]Alessio: Let's talk a bit about TVM too, because we got a lot of things to run through in this episode. [00:07:42]Swyx: I would say that at some point, I'd love to talk about this comparison between extra boost or tree-based type AI or machine learning compared to deep learning, because I think there is a lot of interest around, I guess, merging the two disciplines, right? And we can talk more about that. I don't know where to insert that, by the way, so we can come back to it later. Yeah. [00:08:04]Tianqi: Actually, what I said, when we test the hypothesis, the hypothesis is kind of, I would say it's partially wrong, because the hypothesis we want to test now is, can you run tree-based models on image classification tasks, where deep learning is certainly a no-brainer right [00:08:17]Swyx: now today, right? [00:08:18]Tianqi: But if you try to run it on tabular data, still, you'll find that most people opt for tree-based models. And there's a reason for that, in the sense that when you are looking at tree-based models, the decision boundaries are naturally rules that you're looking at, right? And they also have nice properties, like being able to be agnostic to scale of input and be able to automatically compose features together. And I know there are attempts on building neural network models that work for tabular data, and I also sometimes follow them. I do feel like it's good to have a bit of diversity in the modeling space. Actually, when we're building TVM, we build cost models for the programs, and actually we are using XGBoost for that as well. I still think tree-based models are going to be quite relevant, because first of all, it's really to get it to work out of the box. And also, you will be able to get a bit of interoperability and control monotonicity [00:09:18]Swyx: and so on. [00:09:19]Tianqi: So yes, it's still going to be relevant. I also sometimes keep coming back to think about, are there possible improvements that we can build on top of these models? And definitely, I feel like it's a space that can have some potential in the future. [00:09:34]Swyx: Are there any current projects that you would call out as promising in terms of merging the two directions? [00:09:41]Tianqi: I think there are projects that try to bring a transformer-type model for tabular data. I don't remember specifics of them, but I think even nowadays, if you look at what people are using, tree-based models are still one of their toolkits. So I think maybe eventually it's not even a replacement, it will be just an ensemble of models that you can call. Perfect. [00:10:07]Alessio: Next up, about three years after XGBoost, you built this thing called TVM, which is now a very popular compiler framework for models. Let's talk about, so this came out about at the same time as ONNX. So I think it would be great if you could maybe give a little bit of an overview of how the two things work together. Because it's kind of like the model, then goes to ONNX, then goes to the TVM. But I think a lot of people don't understand the nuances. I can get a bit of a backstory on that. 
[00:10:33]Tianqi: So actually, that's kind of ancient history. Before XGBoost, I worked on deep learning for two or three years. I got a master's before I started my PhD. And during my master's, my thesis focused on applying convolutional restricted Boltzmann machines for ImageNet classification. That is the thing I was working on. And that was before the AlexNet moment. So effectively, I had to handcraft NVIDIA CUDA kernels on, I think, a GTX 2070 card. It took me about six months to get one model working. And eventually, that model was not so good, and we should have picked a better model. But that was like an ancient history that really got me into this deep learning field. And of course, eventually, we found it didn't work out. So in my master's, I ended up working on recommender systems, which got me a paper, and I applied and got a PhD. But I always wanted to come back to work on the deep learning field. So after XGBoost, I started to work with some folks on MXNet. At that time, the frameworks like Caffe, Theano, PyTorch hadn't yet come out. And we were really working hard to optimize for performance on GPUs. At that time, I found it's really hard, even for NVIDIA GPUs. It took me six months. And then it's amazing to see on different hardware how hard it is to go and optimize code for the platforms that are interesting. So that got me thinking, can we build something more generic and automatic? So that I don't need an entire team of so many people to go and build those frameworks. So that's the motivation for starting to work on TVM. There was really too much machine learning engineering needed to support deep learning models on the platforms that we're interested in. I think it started a bit earlier than ONNX, but once ONNX got announced, I think they were in a similar time period. So overall, how it works is that with TVM, you will be able to take a subset of machine learning programs that are represented in what we call a computational graph. Nowadays, we can also represent loop-level programs ingested from your machine learning models. Usually, you have model formats like ONNX, or in PyTorch, they have the FX tracer that allows you to trace the FX graph. And then it goes through TVM. We also realized that, well, yes, it needs to be more customizable, so it will be able to perform some of the compilation optimizations like fusing operators together, doing smart memory planning, and more importantly, generating low-level code. So that works for NVIDIA and also is portable to other GPU backends, even non-GPU backends [00:13:36]Swyx: out there. [00:13:37]Tianqi: So that's a project that actually has been my primary focus over the past few years. And it's great to see how it started from where I think we were the very early initiators of machine learning compilation. I remember there was a visit one day; one of the students asked me, are you still working on deep learning frameworks? I told them that I'm working on ML compilation. And they said, okay, compilation, that sounds very ancient. It sounds like a very old field. And why are you working on this? And now it's starting to get more traction, like if you see Torch Compile and other things. I'm really glad to see this field starting to pick up. And also we have to continue innovating here. [00:14:17]Alessio: I think the other thing that I noticed is, it's kind of like a big jump in terms of area of focus to go from XGBoost to TVM, it's kind of like a different part of the stack.
Why did you decide to do that? And I think the other thing about compiling to different GPUs and eventually CPUs too, did you already see some of the strain that models could have just being focused on one runtime, only being on CUDA, and how much of that went into it? [00:14:50]Tianqi: I think it's less about trying to get impact, more about wanting to have fun. I like to hack code, I had great fun hacking CUDA code. Of course, being able to generate CUDA code is cool, right? But now, after being able to generate CUDA code, okay, by the way, you can do it on other platforms, isn't that amazing? So it's more of that attitude that got me started on this. And also, I think when we look at different researchers, I myself am more of a problem solver type. So I like to look at a problem and say, okay, what kind of tools do we need to solve that problem? So regardless, it could be building better models. For example, while we built XGBoost, we built certain regularizations into it so that it's more robust. It also means building system optimizations, writing low-level code, maybe trying to write assembly and build compilers and so on. So as long as they solve the problem, definitely go and try to do them together. And I also see it's a common trend right now. Like if you want to be able to solve machine learning problems, it's no longer at the algorithm layer, right? You kind of need to solve it from both the algorithm, data, and systems angle. And this entire field of machine learning systems, I think it's kind of emerging. And there's now a conference around it. And it's really good to see a lot more people starting to look into this. [00:16:10]Swyx: Yeah. Are you talking about ICML or something else? [00:16:13]Tianqi: So machine learning and systems, right? So not only machine learning, but machine learning and systems. So there's a conference called MLSys. It's definitely a smaller community than ICML, but I think it's also an emerging and growing community where people are talking about what are the implications of building systems for machine learning, right? And how do you go and optimize things around that and co-design models and systems together? [00:16:37]Swyx: Yeah. And you were area chair for ICML and NeurIPS as well. So you've just had a lot of conference and community organization experience. Is that also an important part of your work? Well, it's kind of expected for academics. [00:16:48]Tianqi: If I hold an academic job, I need to do service for the community. Okay, great. [00:16:53]Swyx: Your most recent venture in MLSys is going to the phone with MLC LLM. You announced this in April. I have it on my phone. It's great. I'm running Llama 2, Vicuna. I don't know what other models you offer. But maybe just kind of describe your journey into MLC. And I don't know how this coincides with your work at CMU. Is that some kind of outgrowth? [00:17:18]Tianqi: I think it's more like a focused effort that we want in the area of machine learning compilation. So it's kind of related to what we built in TVM. So when we built TVM, that was five years ago, right? And a lot of things happened. We built the end-to-end machine learning compiler that works, the first one that works. But then we captured a lot of lessons there. So then we are building a second iteration called TVM Unity. That allows ML engineers to quickly capture new models and build on-demand optimizations for them. And MLC LLM is kind of part of MLC.
It's more like a vertically driven effort where we go and build tutorials and build projects, like bringing LLMs to solutions. So that really shows, okay, you can take machine learning compilation technology and apply it and bring something fun forward. Yeah. So yes, it runs on phones, which is really cool. But the goal here is not only making it run on phones, right? The goal is making it deploy universally. So we do run on Apple M2 Macs, the 70 billion models. Actually, on single-batch inference, more recently on CUDA, we get, I think, the best performance you can get out there already on 4-bit inference. Actually, as I alluded to earlier before the podcast, we just had a result on AMD. And on a single batch, actually, the latest AMD GPU, and this is a consumer card, can get to about 80% of the 4090, NVIDIA's best consumer card out there. So it's not yet on par, but thinking about the diversity and what you can enable, and what you previously could get on that card, it's really amazing what you can do with this kind of technology. [00:19:10]Swyx: So one thing I'm a little bit confused by is that most of these models are in PyTorch, but you're running this inside TVM. I don't know. Was there any fundamental change that you needed to do, or was this basically the fundamental design of TVM? [00:19:25]Tianqi: So the idea is that, of course, it comes back to program representation, right? So effectively, TVM has this program representation called TVMScript that contains both a computational graph and operator-level representations. So yes, initially, we do need to take a bit of effort to bring those models onto the program representation that TVM supports. Usually, there are a mix of ways, depending on the kind of model you're looking at. For example, for vision models and stable diffusion models, usually we can just do tracing that takes the PyTorch model onto TVM. That part is still being robustified so that we can bring more models in. On language model tasks, actually what we do is we directly build some of the model constructors and try to directly map from Hugging Face models. The goal is, if you have a Hugging Face configuration, we will be able to bring that in and apply optimizations on it. So one fun thing about model compilation is that your optimization doesn't happen only at the source language level, right? For example, if you're writing PyTorch code, you just go and try to use a better fused operator at a source code level. Torch compile might help you do a bit of things in there. In most model compilation, it not only happens at the beginning stage, but we also apply generic transformations in between, also through a Python API. So you can tweak some of that. So that part of optimization helps a lot in uplifting both performance and portability across environments. And another thing that we do have is what we call universal deployment. So if you get the ML program into this TVMScript format, where there are functions that take in tensors and output tensors, we will have a way to compile it. So you will be able to load the function in any of the language runtimes that TVM supports. So you could load it in JavaScript, and that's a JavaScript function that takes in tensors and outputs tensors. The same if you're loading it in Python, of course, and C++ and Java. So the goal there is really to bring the ML model to the languages that people care about and be able to run it on the platforms they like.
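To make the flow described above a bit more concrete, here is a minimal sketch in Python of that kind of pipeline: trace a PyTorch model, import it into TVM, compile it for a target, and run it through the graph executor. This is an illustrative example based on TVM's Relay frontend, not code from the episode; the toy model, input name, and target strings are assumptions, and exact module paths depend on the TVM version installed.

    import torch
    import tvm
    from tvm import relay
    from tvm.contrib import graph_executor

    # A tiny PyTorch model, traced to TorchScript so it can be imported.
    model = torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.ReLU()).eval()
    example = torch.randn(1, 16)
    scripted = torch.jit.trace(model, example)

    # Import the traced graph into TVM's Relay representation.
    mod, params = relay.frontend.from_pytorch(scripted, [("input0", (1, 16))])

    # Compile for a target; swapping the target string ("cuda", "metal",
    # "vulkan", ...) retargets the same model to different hardware.
    target = "llvm"
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target=target, params=params)

    # Run the compiled module: a function that takes tensors in and gives tensors out.
    dev = tvm.device(target, 0)
    rt = graph_executor.GraphModule(lib["default"](dev))
    rt.set_input("input0", tvm.nd.array(example.numpy()))
    rt.run()
    print(rt.get_output(0).numpy().shape)

A similar artifact compiled for the right target can then be loaded from the other runtimes TVM supports (C++, Java, JavaScript), which is what the universal deployment idea above refers to.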
[00:21:37]Swyx: It strikes me that I've talked to a lot of compiler people, but you don't have a traditional compiler background. You're inventing your own discipline called machine learning compilation, or MLC. Do you think that this will be a bigger field going forward? [00:21:52]Tianqi: First of all, I do work with people working on compilation as well. So we're also taking inspiration from a lot of early innovations in the field. For example, TVM initially took a lot of inspiration from Halide, which is an image processing compiler. And of course, since then, we have evolved quite a bit to focus on machine learning related compilation. If you look at some of our conference publications, you'll find that machine learning compilation is already kind of a subfield. So if you look at papers in both machine learning venues, the MLC conferences, of course, and also systems venues, every year there will be papers around machine learning compilation. And in the compiler conference called CGO, there's a C4ML workshop that is also trying to focus on this area. So definitely it's already starting to gain traction and becoming a field. I wouldn't claim that I invented this field, but definitely I helped to work with a lot of folks there. And I try to bring a perspective, of course, trying to learn a lot from compiler optimizations as well as trying to bring in knowledge of machine learning and systems together. [00:23:07]Alessio: So we had George Hotz on the podcast a few episodes ago, and he had a lot to say about AMD and their software. So when you think about TVM, are you still restricted in a way by the performance of the underlying kernel, so to speak? So if your target is like a CUDA runtime, you still get better performance, no matter like TVM kind of helps you get there, but then that level you don't take care of, right? [00:23:34]Swyx: There are two parts in here, right? [00:23:35]Tianqi: So first of all, there is the lower level runtime, like the CUDA runtime. And then actually for NVIDIA, a lot of the moat came from their libraries, like CUTLASS, cuDNN, right? Those library optimizations. And also for specialized workloads, actually you can specialize them. Because in a lot of cases you'll find that if you go and do benchmarks, it's very interesting. Like two years ago, if you tried to benchmark ResNet, for example, usually the NVIDIA library [00:24:04]Swyx: gives you the best performance. [00:24:06]Tianqi: It's really hard to beat them. But as soon as you start to change the model to something, maybe a bit of a variation of ResNet, not for the traditional ImageNet detections, but for latent detection and so on, there will be some room for optimization because people sometimes overfit to benchmarks. These are people who go and optimize things, right? So people overfit the benchmarks. So that's the largest barrier, like being able to get low-level kernel libraries, right? In that sense, the goal of TVM is actually that we try to have a generic layer to both, of course, leverage libraries when available, but also be able to automatically generate [00:24:45]Swyx: libraries when possible. [00:24:46]Tianqi: So in that sense, we are not restricted by the libraries that they have to offer. That's why we will be able to run on Apple M2 or WebGPU where there's no library available, because we are automatically generating libraries. That makes it easier to support less well-supported hardware, right? For example, WebGPU is one example.
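As a rough illustration of what "automatically generating libraries" means in practice, here is a small sketch using TVM's tensor expression API: the computation is described once, a schedule decides how the loops actually execute, and the same description can be compiled for different targets instead of depending on a hand-written vendor kernel. This is a minimal sketch (the vector-add example from TVM's own tutorials), not code from the episode; for a GPU target the schedule would additionally need to bind loops to GPU threads.

    import numpy as np
    import tvm
    from tvm import te

    # Describe the computation once: element-wise vector addition.
    n = te.var("n")
    A = te.placeholder((n,), name="A")
    B = te.placeholder((n,), name="B")
    C = te.compute((n,), lambda i: A[i] + B[i], name="C")

    # A schedule decides how the loop nest is executed (tiling, vectorization, ...).
    s = te.create_schedule(C.op)

    # Generate machine code for the CPU; changing the target string
    # ("cuda", "metal", "vulkan", ...) regenerates the kernel for that backend.
    fadd = tvm.build(s, [A, B, C], target="llvm", name="vector_add")

    dev = tvm.cpu(0)
    a = tvm.nd.array(np.random.rand(1024).astype("float32"), dev)
    b = tvm.nd.array(np.random.rand(1024).astype("float32"), dev)
    c = tvm.nd.empty((1024,), "float32", dev)
    fadd(a, b, c)
    np.testing.assert_allclose(c.numpy(), a.numpy() + b.numpy())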
From a runtime perspective, AMD, I think before, their Vulkan driver was not very well supported. Recently, they are getting good. But even before that, we were able to support AMD through this GPU graphics backend called Vulkan, which is not as performant, but it gives you decent portability across those [00:25:29]Swyx: hardware. [00:25:29]Alessio: And I know we got other MLC stuff to talk about, like WebLLM, but I want to wrap up on the optimizations that you're doing. So there are kind of four core things, right? Kernel fusion, which we talked a bit about in the flash attention episode and the tinygrad one, memory planning, and loop optimization. I think those are pretty, you know, self-explanatory. I think the ones that people have the most questions about, can you quickly explain [00:25:53]Swyx: those? [00:25:54]Tianqi: So there are kind of different things, right? Kernel fusion means that, you know, if you have an operator like a convolution, or in the case of a transformer an MLP, you have other operators that follow it, right? You don't want to launch two GPU kernels. You want to be able to put them together in a smart way, right? And as for memory planning, it's more about, you know, hey, if you run Python code, every time you generate a new array, you are effectively allocating a new piece of memory, right? Of course, PyTorch and other frameworks try to optimize that for you. So there is a smart memory allocator behind the scenes. But actually, in a lot of cases, it's much better to statically allocate and plan everything ahead of time. And that's where a compiler can come in. First of all, actually for language models, it's much harder because of dynamic shapes. So you need to be able to do what we call symbolic shape tracing. So we have a symbolic variable that tells you the shape of the first tensor is n by 12. And the shape of the third tensor is also n by 12. Or maybe it's n times 2 by 12. Although you don't know what n is, right, you will be able to know that relation and be able to use that to reason about fusion and other decisions. So besides this, I think loop transformation is quite important. And it's actually non-traditional. Originally, if you simply write code and you want to get performance, it's very hard. For example, if you write a matrix multiply, the simplest thing you can do is a triple loop: for i, j, k, you do C[i][j] += A[i][k] * B[k][j]. But that code is 100 times slower than the best available code that you can get. So we do a lot of transformations, like being able to take the original code, trying to put things into shared memory, making use of tensor cores, making use of memory copies, and all of this. Actually, all these things, we also realize that, you know, we cannot do all of them ourselves. So we also make the ML compilation framework available as a Python package, so that people will be able to continuously improve that part of the engineering in a more transparent way. We find that's very useful, actually, for us to be able to get good performance very quickly on some of the new models. Like when Llama 2 came out, we were able to go and look at the whole thing, see where the bottleneck is, and go and optimize those. [00:28:10]Alessio: And then the fourth one being weight quantization. So everybody wants to know about that. And just to give people an idea of the memory saving, if you're doing FP32, it's like four bytes per parameter. Int8 is like one byte per parameter.
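To make those per-parameter numbers concrete, here is a quick back-of-the-envelope sketch in Python (my own arithmetic, not a figure from the episode) of weight memory at different precisions for a 70-billion-parameter model; KV cache and activations add more on top of this.

    # Approximate weight-only memory for a 70B-parameter model.
    params = 70e9
    bytes_per_param = {"fp32": 4.0, "fp16": 2.0, "int8": 1.0, "int4": 0.5}

    for name, nbytes in bytes_per_param.items():
        gib = params * nbytes / (1024 ** 3)
        print(f"{name}: ~{gib:.0f} GiB")

    # fp32: ~261 GiB, fp16: ~130 GiB, int8: ~65 GiB, int4: ~33 GiB

At 4-bit, the weights alone drop to roughly 33 GiB, which is why the later figure of running a 70-billion-parameter model in about 50 GB of RAM on an M2 Max is plausible.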
So you can really shrink down the memory footprint. What are some of the trade-offs there? How do you figure out what the right target is? And what are the precision trade-offs, too? [00:28:37]Tianqi: Right now, a lot of people mostly use int4 for language models. So that really shrinks things down a lot. And more recently, actually, we started to think that, at least in MLC, we don't want to have a strong opinion on what kind of quantization we want to bring, because there are so many researchers in the field. So what we can do is we can allow developers to customize the quantization they want, but we still bring the optimal code for them. So we are working on this item called bring your own quantization. In fact, hopefully MLC will be able to support more quantization formats. And definitely, I think there's an open field that's being explored. Can you bring more sparsity? Can you quantize activations as much as possible, and so on? And it's going to be something that's going to be relevant for quite a while. [00:29:27]Swyx: You mentioned something I wanted to double back on, which is most people use int4 for language models. This is actually not obvious to me. Are you talking about the GGML type people, or even the researchers who are training the models also using int4? [00:29:40]Tianqi: Sorry, so I'm mainly talking about inference, not training, right? So when you're doing training, of course, int4 is harder, right? Maybe you could do some form of mixed-type precision for inference. I think int4 is kind of like, in a lot of cases, you will be able to get away with int4. And actually, that does bring a lot of savings in terms of the memory overhead, and so on. [00:30:09]Alessio: Yeah, that's great. Let's talk a bit about maybe GGML, then there's Mojo. How should people think about MLC? How do all these things play together? I think GGML is focused on model-level re-implementation and improvements. Mojo is a language, a superset of Python. You're more at the compiler level. Do you all work together? Do people choose between them? [00:30:32]Tianqi: So I think in this case, it's great to see the ecosystem become so rich, with so many different ways of doing things. So in our case, GGML is more like you're implementing something from scratch in C, right? So that gives you the ability to go and customize each particular hardware backend. But then you will need to write your own CUDA kernels, and write them again for AMD, and so on. So the engineering effort is a bit more broadened in that sense. Mojo, I have not looked at the specific details yet. I think it's good to say, it's a language, right? I believe there will also be machine learning compilation technologies behind it. So it sits in an interesting place there. In the case of MLC, our take is that we do not want to have an opinion on how, where, in which language people want to develop, deploy, and so on. And we also realize that actually there are two phases. You want to be able to develop and optimize your model. By optimization, I mean really bringing in the best CUDA kernels and doing some of the machine learning engineering in there. And then there's a phase where you want to deploy it as a part of the app. So if you look at the space, you'll find that GGML is more like, I'm going to develop and optimize in the C language, right, and then most of the low-level languages they have. And Mojo is that you want to develop and optimize in Mojo, right? And you deploy in Mojo.
In fact, that's the philosophy they want to push for. In the MLC case, we find that actually, if you want to develop models, the machine learning community likes Python. Python is the language you should focus on. So in the case of MLC, we really want to enable not only being able to just define your model in Python, that's very common, right? But also to do ML optimization, like engineering optimization, CUDA kernel optimization, memory planning, all those things in Python, so that it's customizable and so on. But when you do deployment, we realize that people want a bit of a universal flavor. If you are a web developer, you want JavaScript, right? If you're maybe an embedded systems person, maybe you would prefer C++ or C or Rust. And people sometimes do like Python in a lot of cases. So in the case of MLC, we really want to have this vision of: you build a generic optimization in Python, then you deploy that universally onto the environments that people like. [00:32:54]Swyx: That's a great perspective and comparison, I guess. One thing I wanted to make sure that we cover is that I think you are one of this emerging set of academics that also very much focus on your artifacts of delivery. Of course. Something we talked about for three years, that he was very focused on his GitHub. And obviously you treated XGBoost like a product, you know? And then now you're publishing an iPhone app. Okay. Yeah. Yeah. What is your thinking about academics getting involved in shipping products? [00:33:24]Tianqi: I think there are different ways of making impact, right? Definitely, you know, there are academics that are writing papers and building insights for people so that people can build products on top of them. In my case, I think the particular field I'm working on, machine learning systems, I feel like really we need to be able to get it into the hands of people so that really we see the problem, right? And we show that we can solve a problem. And it's a different way of making impact. And there are academics that are doing similar things. Like, you know, if you look at some of the people from Berkeley, right? Every few years, they will come up with big open source projects. Certainly, I think it's just a healthy ecosystem to have different ways of making impact. And I feel like really being able to do open source and work with the open source community is really rewarding, because we have a real problem to work on when we build our research. Actually, that research comes together and people are able to make use of it. And we also start to see interesting research challenges that we wouldn't otherwise see, right, if you're just trying to do a prototype and so on. So I feel like it's something that is one interesting way of making impact, making contributions. [00:34:40]Swyx: Yeah, you definitely have a lot of impact there. And having experience publishing Mac stuff before, the Apple App Store is no joke. It is the hardest compilation, human compilation effort. So one thing that we definitely wanted to cover is running in the browser. You have a 70 billion parameter model running in the browser. That's right. Can you just talk about how? Yeah, of course. [00:35:02]Tianqi: So I think there are a few elements that need to come in, right? First of all, you know, we do need a MacBook, the latest one, like an M2 Max, because you need the memory to be big enough to cover that. So for a 70 billion model, it takes you about, I think, 50 gigabytes of RAM.
So the M2 Max, the upper version, will be able to run it, right? And it also leverages machine learning compilation. Again, what we are doing is the same: whether it's running on iPhone, on server cloud GPUs, on AMD, or on a MacBook, we all go through that same MLC pipeline. Of course, in certain cases, maybe we'll do a bit of customization iteration for either one. And then it runs on the browser runtime, this package called WebLLM. So that will effectively... So what we do is we will take that original model and compile it to what we call WebGPU. And then WebLLM will be able to pick it up. And WebGPU is this latest GPU technology that major browsers are shipping right now. So you can get it in Chrome already. It allows you to access your native GPUs from a browser. And then effectively, that language model is just invoking the WebGPU kernels through there. So actually, when Llama 2 came out, initially we asked the question, can you run 70 billion on a MacBook? That was the question we were asking. So first, we actually... Jin Lu, who is the engineer pushing this, he got 70 billion on a MacBook. We had a CLI version. So in MLC, you will be able to... That runs through a Metal accelerator. So effectively, you use the Metal programming language to get the GPU acceleration. So we found, okay, it works for the MacBook. Then we asked, we had a WebGPU backend. Why not try it there? So we just tried it out. And it's really amazing to see everything up and running. And actually, it runs smoothly in that case. So I do think there are some kind of interesting use cases already in this, because everybody has a browser. You don't need to install anything. I think it doesn't make sense yet to really run a 70 billion model in a browser, because you kind of need to be able to download the weights and so on. But I think we're getting there. Effectively, the most powerful models you will be able to run on a consumer device. It's kind of really amazing. And also, in a lot of cases, there might be use cases. For example, if I'm going to build a chatbot that I talk to and it answers questions, maybe some of the components, like the voice to text, could run on the client side. And so there are a lot of possibilities of being able to have something hybrid that contains the edge component or something that runs on a server. [00:37:47]Alessio: Do these browser models have a way for applications to hook into them? So if I'm using, say, you can use OpenAI or you can use the local model. Of course. [00:37:56]Tianqi: Right now, actually, we are building... So there's an NPM package called WebLLM, right? So that you will be able to, if you want to embed it onto your web app, you will be able to directly depend on WebLLM and use it. We also have a REST API that's OpenAI compatible. So that REST API, I think, right now, it's actually running on a native backend, so that if you have a CUDA server, it's faster to run on the native backend. But also we have a WebGPU version of it that you can go and run. So yeah, we do want to be able to have easier integrations with existing applications. And the OpenAI API is certainly one way to do that. Yeah, this is great. [00:38:37]Swyx: I actually did not know there's an NPM package that makes it very, very easy to try out and use. I want to actually... One thing I'm unclear about is the chronology. Because as far as I know, Chrome shipped WebGPU the same time that you shipped WebLLM. Okay, yeah. So did you have some kind of secret chat with Chrome?
[00:38:57]Tianqi: The good news is that Chrome is doing a very good job of trying to have early releases. So although the official shipment of Chrome WebGPU is the same time as WebLLM, actually, you were able to try out WebGPU technology in Chrome earlier. There is an unstable version called Canary. I think as early as two years ago, there was a WebGPU version. Of course, it's getting better. So we had a TVM-based WebGPU backend two years ago. Of course, at that time, there were no language models. It was running on less interesting, well, still quite interesting models. And then this year, we really started to see it getting mature and the performance keeping up. So we have a more serious push of bringing the language-model-compatible runtime onto WebGPU. [00:39:45]Swyx: I think you agree that the hardest part is the model download. Has there been conversations about a one-time model download and sharing between all the apps that might use this API? That is a great point. [00:39:58]Tianqi: I think it's already supported in some sense. When we download the model, WebLLM will cache it into a special Chrome cache. So if a different web app uses the same WebLLM JavaScript package, you don't need to redownload the model again. So there is already something there. But of course, you have to download the model once at least to be able to use it. [00:40:19]Swyx: Okay. One more thing just in general before we're about to zoom out to OctoAI. Just the last question is, you're not the only project working on, I guess, local models. That's right. Alternative models. There's GPT4All, there's Ollama that just recently came out, and there's a bunch of these. What would be your advice to them on what's a valuable problem to work on? And what is just thin wrappers around ggml? Like, what are the interesting problems in this space, basically? [00:40:45]Tianqi: I think making APIs better is certainly something useful, right? In general, one thing that we do try to push very hard on is this idea of easier universal deployment. So we are also looking forward to actually having more integration with MLC. That's why we're trying to build APIs like WebLLM and other things. So we're also looking forward to collaborating with all those ecosystems and working on support to bring in models more universally and be able to also keep up the best performance when possible in a more push-button way. [00:41:15]Alessio: So as we mentioned in the beginning, you're also the co-founder of OctoML. Recently, OctoML released OctoAI, which is a compute service, basically focused on optimizing model runtimes and acceleration and compilation. What has been the evolution there? So Octo started as kind of like a traditional MLOps tool, where people were building their own models and you helped them on that side. And then it seems like now most of the market is shifting to starting from pre-trained generative models. Yeah, what has been that experience for you, and how have you seen the market evolve? And how did you decide to release OctoAI? [00:41:52]Tianqi: One thing that we found out is that on one hand, it's really easy to go and get something up and running, right? But as soon as you start to consider all the possible availability and scalability issues and even integration issues, it becomes kind of interesting and complicated. So we really want to make sure to help people to get that part easy, right?
And now a lot of things, if we look at the customers we talk to and the market, certainly generative AI is something that is very interesting. So that is something that we really hope to help elevate. And also building on top of technology we built to enable things like portability across hardware. And you will be able to not worry about the specific details, right? Just focus on getting the model out. We'll try to work on infrastructure and other things that help on the other end. [00:42:45]Alessio: And when it comes to getting optimization on the runtime, I see, since we run an early adopters community, most enterprises' issue is how to actually run these models. Do you see that as one of the big bottlenecks now? I think a few years ago it was like, well, we don't have a lot of machine learning talent, we cannot develop our own models. Versus now it's like, there are these great models you can use, but I don't know how to run them efficiently. [00:43:12]Tianqi: That depends on what you mean by running, right? On one hand, it's easy to download something like MLC, you download it, you run it on a laptop. But then there are also different decisions, right? What if you are trying to serve a larger user request? What if that request changes? What if the availability of hardware changes? Right now it's really hard to get the latest hardware from NVIDIA, unfortunately, because everybody's trying to work on things using the hardware that's out there. So I think when the definition of run changes, there are a lot more questions around things. And also in a lot of cases, it's not only about running models, it's also about being able to solve problems around them. How do you manage your model locations and how do you make sure that you get your model close to your execution environment more efficiently? So definitely a lot of engineering challenges out there. That we hope to elevate, yeah. And also, if you think about our future, definitely I feel like right now, given the technology and the kind of hardware availability we have today, we will need to make use of all the possible hardware available out there. That will include a mechanism for cutting down costs, bringing something to the edge and cloud in a more natural way. So I feel like this is still a very early stage of where we are, but it's already good to see a lot of interesting progress. [00:44:35]Alessio: Yeah, that's awesome. I would love, I don't know how much we're going to go in depth into it, but what does it take to actually abstract all of this from the end user? You know, like they don't need to know what GPUs you run, what cloud you're running them on. You take all of that away. What was that like as an engineering challenge? [00:44:51]Tianqi: So I think there are engineering challenges there. In fact, first of all, you will need to be able to support all the kinds of hardware backends you have, right? On one hand, if you look at the NVIDIA libraries, you'll find, not too surprisingly, most of the latest libraries work well on the latest GPU. But there are other GPUs out there in the cloud as well. So certainly being able to have the know-how and being able to do model optimization is one thing, right? Also infrastructure for being able to scale things up and locate models. And in a lot of cases, we do find that on typical models, it also requires kind of vertical iterations. So it's not about, you know, building a silver bullet and that silver bullet is going to solve all the problems.
It's more about, you know, we're building a product, we work with the users, and we find out there are interesting opportunities at a certain point. And then our engineers will go and solve that, and it will automatically be reflected in the service. [00:45:45]Swyx: Awesome. [00:45:46]Alessio: We can jump into the lightning round until, I don't know, Sean, if you have more questions or TQ, if you have more stuff you wanted to talk about that we didn't get a chance to [00:45:54]Swyx: touch on. [00:45:54]Alessio: Yeah, we have talked a lot. [00:45:55]Swyx: So, yeah. We always would like to ask, you know, do you have a commentary on other parts of AI and ML that is interesting to you? [00:46:03]Tianqi: So right now, I think one thing that we are really pushing hard for is this question about how far can we bring open source, right? I'm kind of like a hacker and I really like to put things together. So I think it's unclear what the future of AI looks like. On one hand, it could be possible that, you know, you just have a few big players, you just try to talk to those bigger language models and they can do everything, right? On the other hand, one of the things that we in academia are really excited about and pushing for, and that's one reason why I'm pushing for MLC, is: can we build something where you have different models? You have personal models that know the best movie you like, but you also have bigger models that maybe know more, and you get those models to interact with each other, right? And be able to have a wide ecosystem of AI agents that helps each person while still being able to do things like personalization. Some of them can run locally, some of them, of course, run on a cloud, and how do they interact with each other? So I think that is a very exciting time where the future is yet undecided, but I feel like there is something we can do to shape that future as well. [00:47:18]Swyx: One more thing, which is something I'm also pursuing, which is, and this kind of goes back into predictions, but also back in your history, do you have any idea, or are you looking out for anything post-transformers as far as architecture is concerned? [00:47:32]Tianqi: I think, you know, in a lot of these cases, you can find there are already promising models for long contexts, right? There are state space models, where, like, you know, some of our colleagues, like Albert, worked on the HiPPO models, right? And then there is an open source version called RWKV. It's like a recurrent model that allows you to summarize things. Actually, we are bringing RWKV to MLC as well, so maybe you will be able to see one of those models. [00:48:00]Swyx: We actually recorded an episode with one of the RWKV core members. It's unclear because there's no academic backing. It's just open source people. Oh, I see. So you like the merging of recurrent networks and transformers? [00:48:13]Tianqi: I do love to see this model space continue growing, right? And I feel like in a lot of cases, it's just that the attention mechanism is getting changed in some sense. So I feel like definitely there are still a lot of things to be explored here. And that is also one reason why we want to keep pushing machine learning compilation, because one of the things we are trying to push on is productivity for machine learning engineering, so that as soon as some of these models come out, we will be able to, you know, bring them onto those environments that are out there.
[00:48:43]Swyx: Yeah, it's a really good mission. Okay. Very excited to see that RWKV and state space model stuff. I'm hearing increasing chatter about that stuff. Okay. Lightning round, as always fun. I'll take the first one. Acceleration. What has already happened in AI that you thought would take much longer? [00:48:59]Tianqi: The emergence of this conversational chatbot ability is something that kind of surprised me before it came out. This is one piece that I feel originally I thought would take much longer, but yeah, [00:49:11]Swyx: it happens. And it's funny because the original ELIZA chatbot was something that goes all the way back in time. Right. And then it just suddenly came back again. Yeah. [00:49:21]Tianqi: It's always interesting to think about, but with a kind of different technology [00:49:25]Swyx: in some sense. [00:49:25]Alessio: What about the most interesting unsolved question in AI? [00:49:31]Swyx: That's a hard one, right? [00:49:32]Tianqi: So I can tell you what kind of thing I'm excited about. So I think that I have always been excited about this idea of continuous learning and lifelong learning in some sense. So how AI continues to evolve with the knowledge that has been there. It seems that we're getting much closer with all those recent technologies. So being able to develop systems, support that, and be able to think about how AI continues to evolve is something that I'm really excited about. [00:50:01]Swyx: So specifically, just to double click on this, are you talking about continuous training? That's like a training. [00:50:06]Tianqi: I feel like, you know, training, adaptation, they're all similar things, right? You want to think about the entire life cycle, right? The life cycle of collecting data, training, fine-tuning, and maybe having your local context that gets continuously curated and fed into models. So I think all these things are interesting and relevant here. [00:50:29]Swyx: Yeah. I think this is something that people are really asking, you know, right now we have moved a lot into the sort of pre-training phase and off-the-shelf, you know, the model downloads and stuff like that, which seems very counterintuitive compared to the continuous training paradigm that people want. So I guess the last question would be for takeaways. What's basically one message that you want every listener, every person to remember today? [00:50:54]Tianqi: I think it's getting more obvious now, but I think one of the things that I always want to mention in my talks is that, you know, when you're thinking about AI applications, originally people think about algorithms a lot more, right? Algorithms and models are still very important. But usually when you build AI applications, it takes, you know, both the algorithm side, the system optimizations, and the data curation, right? So it takes a combination of so many facets to be able to bring together an AI system, and being able to look at it from that holistic perspective is really useful when we start to build modern applications. I think it's going to continue to be more important in the future. [00:51:35]Swyx: Yeah. Thank you for showing the way on this. And honestly, just making things possible that I thought would take a lot longer. So thanks for everything you've done. [00:51:46]Tianqi: Thank you for having me. [00:51:47]Swyx: Yeah. [00:51:47]Alessio: Thanks for coming on TQ. [00:51:49]Swyx: Have a good one. [00:51:49] Get full access to Latent Space at www.latent.space/subscribe

Blue Angel Phantoms
The TRUE STORY of the Blue Angels' F7U Cutlass Featuring Edward "Whitey" Feightner

Blue Angel Phantoms

Play Episode Listen Later Aug 6, 2023 48:50


With its sleek and unusual tailless design, the Vought F7U Cutlass seemed like a perfect fit for the U.S. Navy's Flight Demonstration Team, the Blue Angels. However, as the Blues would find out, the aircraft's complex development history led to significant challenges and several near brushes with tragedy. In this brand new episode of the Blue Angel Phantoms Podcast, you'll hear directly from RADM Edward "Whitey" Feightner, a WWII Ace and Navy test pilot, who was charged with developing the Cutlass program on the Blue Angels for the 1952 airshow season. What makes this particular interview all the more special is that it was conducted over 25 years ago by aviation historian Nicholas A. Veronico, who is publicly sharing this historic treasure for the first time. Originally recorded as research for Veronico's book "The Blue Angels: A Fly-By History", Rear Admiral Feightner provides detailed insight into the Blue Angels' reformation after the Korean War, the selection process for the F7U, and the painstaking task of performing aerial demonstrations with the Cutlass' faulty controls. RADM Feightner also discusses the Blues' challenges with the Grumman F9F-5 Panther and recalls his tragic final day on the Team, in which pilot Buddy Rich was lost in a midair collision at NAS Corpus Christi. RADM Feightner's naval career spanned over 33 years, during which he achieved early success flying both the F4F Wildcat and F6F Hellcat during WWII, earning 9 aerial victories. Post-war, he became a prolific test pilot for the Navy, where he was introduced to the F7U Cutlass; he holds the distinction of being the only pilot to launch and recover the F7U-1 on a carrier. In 1952, Feightner was asked to take command of the Blue Angels and install the F7U as the Team's primary demonstration platform. Aware of the F7U's extreme limitations, Feightner was able to persuade the Navy to select the F9F Panther instead and reserve the Cutlass as a solo act. This resulted in Roy "Butch" Voris, the Team's first flight leader, returning to take point, allowing Feightner to focus on the F7U. The airshow season was plagued by technical problems, including a loss of flight controls on several occasions. With increased strain on maintenance and growing safety concerns, the Blue Angels canceled the F7U program after 7 months. RADM Feightner retired in 1974 and passed away on April 1, 2020 at the age of 100. Special thanks to Nick Veronico for sharing this cassette tape with the Blue Angel Phantoms YouTube Channel. The interview featured within this video is protected by copyright owned by Nicholas A. Veronico. Any unauthorised reproduction, distribution, or public display of this interview or any part thereof is strictly prohibited without written consent from Mr. Veronico. © 2023 Nicholas A. Veronico. Support the show

Titan Up The Defense
Episode 415: Titan Up the Defense 340- New Defenders #148

Titan Up The Defense

Play Episode Listen Later Jul 26, 2023 114:56


THOOM! We read the fun, farcical nonsense that was Defenders #148. Topics include: Smurf accents, laser honey, the return of Cutlass and Typhoon, Groucho Marx, and locked room mysteries. Enjoy! Enjoy! If you enjoy the show and would like access to bonus materials, please consider donating at patreon.com/ttwasteland You can get into touch with us at ttwasteland@gmail.com or Titan Up the Defense PO Box 20311 Portland, OR 97294

WNC Original Music
Ep 168 Cutlass pt 1

WNC Original Music

Play Episode Listen Later Jul 20, 2023 40:19


Cutlass jumps on the podcast and gives all the opinions you are thinking but know better than to say out loud! Just kidding, they're funny, laid-back guys.    Listen and follow Cutlass at these places https://www.instagram.com/cutlassmusic/?hl=en https://open.spotify.com/artist/1Pf6RzYh2dCV4iK9mql7bo https://www.tiktok.com/@cutlassmusic?lang=en   Find Dark City Kings here https://thedarkcitykings.bandcamp.com/track/honey-bee   Subscribe to the podcast - https://podcasts.apple.com/us/podcast/wnc-original-music/id1378776313 https://www.iheart.com/podcast/wnc-original-music-31067964/ This link has all the other places to subscribe https://gopod.me/wncom   Follow on Social Media https://www.facebook.com/wncoriginalmusic https://www.wncoriginalmusic.com https://www.instagram.com/wnc_original_music/   All music used by permission   Cutlass is a dynamic alternative rock duo that draws heavy influence from the music of the late 60s to early 90s. Notable influences include Sam Cooke, Jimi Hendrix, the Beatles, Drake, Cake, Beethoven, Soundgarden, Prince, Rihanna and the Human Condition.  Their 4th studio album "The East" will be released Winter 2022. Essential recordings include "Blue Bloods," "Used to," and "Slave". Cutlass can be found on all streaming platforms via the name Cutlass Music. 

The Great Detectives of Old Time Radio
Dangerous Assignment: Kroner Cutlass (EP4067)

The Great Detectives of Old Time Radio

Play Episode Listen Later May 3, 2023 34:05


Today's Mystery: Steve goes to Europe to clear the U.S. of having stolen a cutlass that's a powerful national symbol. The problem? A U.S. Navy officer's wife is in possession of one of the jewels from the cutlass. Original Radio Broadcast Date: February 10, 1951. Originated from Hollywood. Stars: Brian Donlevy as Steve Mitchell, Herb Butterfield as the Commissioner. Support the show monthly at patreon.greatdetectives.net. Support the show on a one-time basis at http://support.greatdetectives.net. Mail a donation to: Adam Graham, PO Box 15913, Boise, Idaho 83715. Take the listener survey at http://survey.greatdetectives.net. Give us a call at 208-991-4783. Follow us on Instagram at http://instagram.com/greatdetectives. Follow us on Twitter @radiodetectives. Join us back here tomorrow for another detective drama from the Golden Age of Radio.

The Don Tony Show / Wednesday Night Don-O-Mite
Wednesday Night Don-O-Mite 4/12/23: Jeff Hardy Takes Victory Road Back To AEW

The Don Tony Show / Wednesday Night Don-O-Mite

Play Episode Listen Later Apr 13, 2023 105:32


Wednesday Night Don-O-Mite (4/12/2023) hosted by Don Tony and presented by BlueWire Some Topics Discussed: AEW Dynamite 4/12/23 results: Jeff Hardy returns; Sting returns, complete with pom poms and a mic; MJF/Darby get verbal; Wardlow trashes Hobbs' Cutlass; Elite and BCC brawl and much more Sting? Goldberg? Jeff Hardy? DT expects at least one retirement match to happen at AEW All In PPV DT immensely enjoyed Keith Lee vs Chris Jericho on AEW Dynamite for reasons older wrestling fans will appreciate Elite media foolishly thinks AEW drawing 30% for All In at Wembley Stadium (Seating capacity: 90,000) would be a success NXT 4/11/23 results and TV rating (Last week: 555K) Bron Breakker's recent 'change in attitude' will reap huge rewards when promoted to the main roster Despite many top future stars and banger matches, why are NXT ratings stuck in the mud? Positive Update: Grizzled Young Veterans (The Dyad) are still with WWE/NXT NXT Battleground to go head-to-head with AEW Double or Nothing PPV (Sun 5/28/23) Bret Hart thinks AEW has gone in a bad direction with excessive blood and violence. Impact Wrestling 'Rebellion' PPV (4/16/23) Preview and Predictions Courting Court: MLW Underground quietly wraps up ten-week run next week on Reelz and lawsuit against WWE in jeopardy despite non-existent news coverage. Will Reelz extend a new deal with AEW?

First Baptist Gray
Among the Ruins | '72 Cutlass

First Baptist Gray

Play Episode Listen Later Mar 20, 2023 36:34


In this week's episode, Pastor Randy continues his sermon series through the book of 1 Corinthians with a sermon entitled, "72 Cutlass."

1001 RADIO DAYS
THE KRONER CUTLASS and THE NAZI BUZZ BOMB DANGEROUS ASSIGNMENT

1001 RADIO DAYS

Play Episode Listen Later Mar 1, 2023 64:54


Dangerous Assignment was an NBC radio drama starring Brian Donlevy broadcast in the US 1949–1953. It preceded the James Bond character and books and may well have inspired them. "The Commissioner" sent US special agent Steve Mitchell to exotic locales all over the world, where he would encounter adventure and international intrigue in pursuit of some secret. Each show would always open with a brief teaser scene from the episode to follow. After the intro, Steve Mitchell would be summoned to the office of 'The Commissioner', the regional head of an unnamed US State Department agency created to address international unrest as it affected U.S. interests. "The Commissioner" would give background information, explain the current situation and tell Steve his assignment. Steve's cover identity, in almost all his adventures, was that of a suave debonair foreign correspondent for an unnamed print publication — his assignments invariably involved deceit, trickery, and violence, all tied together into a successful resolution by the end of the episode. Dangerous Assignment started out as a replacement radio series broadcast in the US on the NBC radio network in the summer of 1949; it became a syndicated series in early 1950. Reportedly, star Brian Donlevy himself was the one who brought the show to NBC. In the American radio shows, Donlevy was both the protagonist within the action and the narrator, giving the show "a suspenseful immediacy." The only other regular actor on the radio shows was Herb Butterfield, who played "The Commissioner." Many stage and screen actors appeared as guest-stars including, among many others, William Conrad, Raymond Burr, Richard Boone, and Eddie Cantor. The radio show started out as a seven-week summer replacement series broadcast on NBC Saturdays 8:30–9 PM EST. It premiered July 9, 1949; the last episode was on August 20, 1949. A character portraying the Commissioner's secretary, 'Ruthie', was played by Betty Moran — it is hinted that there was some romantic history between Ruthie and Steve Mitchell. Learn more about your ad choices. Visit podcastchoices.com/adchoices

Cedarville Stories
S8:E2 | Jeff Haymond

Cedarville Stories

Play Episode Listen Later Jan 11, 2023 34:27


Faithfully Serving. Jeff Haymond can be easily spotted on campus as he drives up in his vintage Cutlass convertible. But his love for vintage cars is just one small thing that makes his life interesting. Jeff served in the United States Air Force for 29 years, reaching the rank of Colonel before retiring in 2010. During those 29 years, Jeff collected numerous stories. But one he will never forget was his day at the Pentagon on September 11, 2001. Not only did he serve his country, but he also served a stranger in a remarkable way. While on a base in California, Jeff and other military members all donated blood, with the possibility of being matched with a recipient in need. Later he got a phone call saying his blood was a life-saving match for a 15-year-old Australian girl. Without hesitation, Jeff donated his stem cells to a girl with non-Hodgkin's lymphoma, knowing he'd never meet her. After retiring from the military, Cedarville had a faculty opening, and Jeff decided to pursue it. He started as a professor of business, moved to assistant chair, and now finds himself the Dean of the Robert W. Plaster School of Business. Jeff is passionate about equipping and training his students to be prepared to face a hostile culture as they enter the business world after graduation. 

One Life Left's Podcast
#495 - Cracked Cutlass

One Life Left's Podcast

Play Episode Listen Later Jan 11, 2023 68:59


Happy New Year, dear listeners, and what a start to the year we have for you. Yes, you! Emerging, Ursula Andress-like, from an ocean/river near you is this week's Super Special Guest: Charles Cecil MBE! Alongside game dev anecdotes, Sir Charles also reveals LOADS of upcoming game exclusives just for us - sorry Wendy! (only joking... or am i...?) The one thing Charles didn't do was announce any famous deaths. Which is a good thing. We also reveal the winner of World Cup 2022, are distinctly unsurprised at Nintendo having no next gen console out in 2023, discuss Steam Deck game validation shenanigans and get a bit miffed at new(ish) game price drops! Meanwhile we have a date for our next Maraoke: Saturday 21st January. Further details can be found at the link below. What did you get for Christmas? Have you ever told Charles Cecil a secret he couldn't keep?? Why not tell us all about it as well at team@onelifeleft.com or drop us a letter in our Discord server. Link below! Cheerio, Team OLL x Links: The OLL Everything Link!: http://hello.onelifeleft.com/ The Maraoke Everything Link!: http://hello.marao.ke/ The Charles Cecil Twitter Link! https://twitter.com/charlescecil The Revolution Software Link! https://revolution.co.uk/ Reviews: Immortals: Fenix Rising Disney Dreamlight Valley Immortality Learn more about your ad choices. Visit megaphone.fm/adchoices

The Patrick Madrid Show
The Patrick Madrid Show: January 04, 2023 - Hour 3

The Patrick Madrid Show

Play Episode Listen Later Jan 4, 2023 51:06


Patrick answers listener questions about vaccines and masks. Daniel - How do I explain and talk to this woman who didn't want to shake hands with me at Church, but is willing to give out communion as a minister? Fran – You are out of your lane talking about masks and covid vaccines Jane – I was shocked you related the football player's collapse to possible vaccine use Patrick reads a couple more emails thanking him for talking about “the jab” Maria – I'm a nurse and had to get the vaccine so I wouldn't be fired.  When I mentioned the vaccine could've affected the football player, the doctor yelled at me. Kristy - Not only is it the young suddenly dying, but the elderly are suddenly dropping dead as well.  John - Regarding masks, my wife is a nurse and she's forced to wear a mask for 10 hours a day.  Oxygen levels are dropping when wearing a mask during the day. Email from a listener asking who is “Fr. Miguel Gutty” Jenny - I had heart surgery because I had a hole in my heart.  Sometimes these problems go undetected.  Justin – Miguel Cutty is a priest that drives a Cutlass Supreme and they called him 'Cutty'. Lisa - I'm a retired nurse and I worked with COVID patients for two years and I retired instead of taking the jab.  I saw a lot of side effects with the vaccines.  I was disheartened by the medical field.  Follow the money.

KSP
Episode 236 "Cutlass"

KSP

Play Episode Listen Later Dec 5, 2022 90:33


Episode 236 "Cutlass" by KSP

Hank Watson's Garage Hour podcast
11.10.22: Blind-Spot Indicators - Are You the Driver or Meat in the Seat? Loud Pipes Save Lives (& Loud People Save Ideas), Exploding Scooters, Tech VS You (Was Sagan Right?), Two Degrees from a Cutlass, + (Bad) News from Area 51 & 2A Goodness In

Hank Watson's Garage Hour podcast

Play Episode Listen Later Nov 15, 2022 56:57


You can't get this kind of awesomeness without a warrant (or a prescription). This Garage Hour does geek right, with a firearm-friendly amendment in Iowa, big AM-talk in San Diego (board-ops, fer shur), the risk science poses to senseless sinners, why New Yorkers are scared shedless of chintzy scooters from pinko China (apartment flambé, anyone?), and why Kevin Bacon can't hold a candle to our "Two Degrees from a Cutlass" theory. There's also some reminiscence about a client with a private dirt-oval, and an exposé of what happens to your skillset when the electro-nannies start doing your legwork. But wait! the Gearhead Consultancy also explains the "Drowning Man" theory, and we take a look at how the thugly F.B.I. is taking a rope to the progenitor of Dreamland Resort (.com) and his affection for Area 51. There's also Machines of Loving Grace, Goatsnake, St. Vitus, Five Horse Johnson, Primus and Air.

UNHEARD
Episode 119 | "Vanilla"

UNHEARD

Play Episode Listen Later Oct 19, 2022 69:06


Is a 'B' album good for Lil Baby (1:55)? Vance sparks a game of Would You Rather: chart-topping single or good album (31:31)? Chance creatively directs Ice Spice's "Munch Meal" (35:15). Did Steve Lacy sell out his core fans (37:55)? Taj recaps Ye's since-deleted Drink Champs interview and questions journalistic integrity in music (47:57). Plus much more! This week's music, in order played: MadMarcc "Young Nigga Shit," Ojerime "All I Do," MAVI "Reason!," and August 08 x Schoolboy Q "Cutlass."

The Pirate History Podcast
Episode 270 - Cutlass Culliford

The Pirate History Podcast

Play Episode Listen Later Jul 26, 2022 36:45


Today we discuss the origin of Cutlass Culliford. The Pirate History Podcast is a member of the Airwave Media Podcast Network. If you'd like to advertise on The Pirate History Podcast, please contact sales@advertisecast.com.
Sources:
The Memoir of Captain William Willock
Pirates of the Eastern Seas by Charles Grey
The Pirate's Pact by Douglas R. Burgess Jr.
The Calendar of State Papers Colonial, America and West Indies from British History Online
Honor Among Thieves by Jan Rogozinski
Pirate Hunter by Richard Zachs
Captain Kidd by Jan Rogozinski
Learn more about your ad choices. Visit megaphone.fm/adchoices

Best of The Steve Harvey Morning Show
What Does Pretty Translate To?

Best of The Steve Harvey Morning Show

Play Episode Listen Later May 4, 2022 3:51


Good morning and welcome to the ride!  Wonderlove is in the Cutlass. See omnystudio.com/listener for privacy information.

The Midnight Library
S5 Ep6: Beguiling Beheadings

The Midnight Library

Play Episode Listen Later Dec 13, 2021 34:56


Welcome to The Reading Room, Dear Guest!  Why doesn't anyone want to sit beside Mr. Cutlass, our totally just-for-show, prop guillotine??  Oh well.  Please find a comfy spot and pull the blankets up tight around your chilly neck because tonight our heart-stopping topic is all about BEHEADINGS!  When did they start?  How were they improved?  Why were they popular?  Who tried to speak after they were beheaded?  And why are they a hot topic in more ways than one in The Midnight Library?  Head right this way! Special Thanks to Sounds Like an Earful Music Supply for the amazing music AND sound design during this episode.