POPULARITY
Chris R gives a message of depth and weight for CA group Spiritually Lit in the Netherlands on 04-08-25
In the heart of the night, listeners open up freely to the attentive and caring ears of Valérie Darmon. No judgment and no taboos: a frank conversation, along with answers to the questions listeners are asking themselves. A moment of exchange and sharing that invites confidences, so you can leave with a lighter heart. Distributed by Audiomeans. Visit audiomeans.fr/politique-de-confidentialite for more information.
Guest: Alexia Chandon-Piazza. Hosts: Anais and Chris. Production: Candice. Image credits: éditions Cambourakis. The post LGM 2#6 – Interview Alexia Chandon-Piazza – Les Poils first appeared on Radio Vostok.
Is the Libertarian Party growing up into a real party, complete with corruption? :: Call from Skeeter, we disagreed. Again. :: Are borders necessary? :: Israel and Gaza, caller calls for a violent end :: Dave Ridley Update on NH move to block international interference :: The best executive order yet? :: 2025-02-22 :: Chris R., Lori, Riley
Will Trump 2.0 be different or better? :: Trump promises to slash red tape in Washington, we're skeptical :: Sarah begrudgingly accepts that we need cars, but believes states should punish drivers more :: Is capitalism inherently bad, or do we have it at all in the US? :: Caller discusses David Lynch's film legacy :: Names and nicknames :: 2025-01-18 :: Chris R., Bonnie, Stu
"Ho, Ho, Ho Merry Christmas and Happy New Year!" - Honest Abe L. It is Christmas time everyone and what better way to ring in the season than with a heartwarming tale of adventure and found family! We are joined by our favorite Chris, Chris R, tonight as we talk about this anime classic - Tokyo Godfathers. So pull up a chair, get some hot coco, and join us here, on 3DTAA. If you want to reach out leave a comment or find us on X (the artist we wish was still named Twitter) or email us! (threedadstalkingaboutanime@gmail.com) Check our site for random other stuff! - https://threedadstalkingab.wixsite.com/website (Disclaimer: We are not a spoiler-free show, listen at your own risk to the episodes and shows that are discussed.)
Chris R. carries a message of depth and weight for the Fall Roundup in Wichita, KS in November of 2024. Find us at https://maddogspeakers.com/.
Chris R. carries a message of depth and weight for the River City Speaker Meeting in Sacramento, CA on 10/25/2024. Find us at https://maddogspeakers.com/.
Chris R. shares a message of depth and weight for the Emotional Sobriety Conference held via Zoom in June of 2024, hosted by the Everything or Nothing Group in Gainesville, FL. Find us at https://maddogspeakers.com/.
Welcome to the jungle, we have fun and games and characters that just stop by to talk or whatever. Matt Riggs, Joe Fernandez and Andy Fleming chat it up with the wonderful Kayley Horton. If you like what we do, go to our patreon and give us all your money. Just search Dumb Fun. Theme song by Brett Gretzky and logo by Chris R
Classic Episode: Dungeons and Dragons (Episode 52 - Released June 28th 2021). The boys are on the run again in Bolivia, stick around for classic episode Dungeons and Dragons. (Original Show Notes) Episode 052 of That Was Disappointing is Live. Roll (69) for Initiative. Today's Theme: D&D. That stands for Dragons & Dungeons, ya goofballs. Our first topic is your most fun/wasted day in D&D. It's adorable that Art thinks any session of D&D wouldn't be defined as "wasted" by the general public. He needs a real hobby, like stamp collecting. Hell, at least you'd be less likely to die a virgin. Our second topic is your favorite character you've played. No, Chris R. isn't talking about LARPing… it's somehow even more pathetic. So, apparently everyone doesn't play a Conan or Dumbledore-clone? Who knew!? Oh duh… I forgot about Legolas. Our third topic is how D&D has affected your outside life. In other news, people who play D&D actually have a life. Good news for our freshest (read: least pathetic) panelist, Jon, who's new to the game. Or did things already suck for you to begin with? Our fourth topic (not final!) is your best/worst item in D&D. Does a MOAB count or is that just wishful thinking? Tim says there's more to it than just pointy objects you stab each other with. Pffft! Unless the jousting sticks from American Gladiators are DLC, I'm not interested. Our final topic is… why? Yep, our resident smartass/jerkoff, Lex, doesn't know a f'n thing about D&D, so he needs to defer to the rest of the panel to understand the appeal. Producer Kenny, perhaps you can throw someone a lifeline in the future. kthnxbye
Will Biden Legally Drone Strike Mar-a-Lago? :: Satanists in FL offer to fill school roles after DeSantis signs law allowing religious chaplains :: ATG goes back to murdering Americans :: Pastor's lawyers blame 12 year old for initiating sex :: Chris R., Lori, Riley Support Riley on Patreon: https://www.patreon.com/crblake86 Send Bitcoin: 1MnoYoPirXQHfhknDxbDHhLsF9u7kUggKy Send Bitcoin Cash: qpp62s8uupdqkrfew7vgp805pnsh5jk2ncnfkndwrd Dash: XpApo1jcPzTJyLLB6G8GJ7DoW9CGjcV5xT Ether: 0xFb1a23163bea743BB79B93849D864ad070597855 Litecoin: ltc1q6ygsamrkwl0at93datyqfh47z4crg4jkg4fx30
A $91T Government Problem :: Discussing Secession and the Declaration of Independence :: The governments on earth have rung up $91T in debt with no way to pay it back :: An Idaho book restriction law is causing libraries to close their doors :: Japan's Supreme Court rules that its long-running program of forced sterilization was unconstitutional :: 2024-07-03 Chris R., Lori, Riley Support Riley on Patreon: https://www.patreon.com/crblake86 Send Bitcoin: 1MnoYoPirXQHfhknDxbDHhLsF9u7kUggKy Send Bitcoin Cash: qpp62s8uupdqkrfew7vgp805pnsh5jk2ncnfkndwrd Dash: XpApo1jcPzTJyLLB6G8GJ7DoW9CGjcV5xT Ether: 0xFb1a23163bea743BB79B93849D864ad070597855 Litecoin: ltc1q6ygsamrkwl0at93datyqfh47z4crg4jkg4fx30
Assange Plea Deal, and he's on his way home :: Electronic Price Tags :: Inflation, Economic Schools of Thought and Government :: Should point systems for driving records be mandatory? Sara thinks so :: Government inhibits technological advances, at the cost of lives :: Chris R., Lori, Riley Support Riley on Patreon: https://www.patreon.com/crblake86 Send Bitcoin: 1MnoYoPirXQHfhknDxbDHhLsF9u7kUggKy Send Bitcoin Cash: qpp62s8uupdqkrfew7vgp805pnsh5jk2ncnfkndwrd Dash: XpApo1jcPzTJyLLB6G8GJ7DoW9CGjcV5xT Ether: 0xFb1a23163bea743BB79B93849D864ad070597855 Litecoin: ltc1q6ygsamrkwl0at93datyqfh47z4crg4jkg4fx30
YOU ARE TEARING ME APART, LISA!! The Room Full Movie Reaction Watch Along: https://www.patreon.com/thereelrejects Save Money & Cancel Unwanted Subscriptions By Going To https://rocketmoney.com/rejects Having heard for SO LONG of Tommy Wiseau's Magnum Opus of So-Bad-it's Good Cult Cinema, Tara Erickson is FINALLY here to give her First Time Reaction, Commentary, Breakdown, & Full Spoiler Review for the Beloved Midnight Movie Written, Directed, Produced-By, and Starring Tommy Wiseau as Johnny, Juliette Danielle as Lisa, Greg Sestero as Mark, Philip Haldiman as Denny, Kyle Vogt as Peter, Dan Janjigian as Chris R., Carolyn Minnott as Claudette, and Robyn Paris & Mike Holmes as Michelle & Mike!! Oh, Hai Doggy! Tara REACTS to all the Best Moments including The Flower Shop Scene, I Did Not Hit Her, Hai Doggy, The Rooftop Scene, Football Scene, The Alleyway Scene, Me Underwears, and BEYOND! Follow Tara Erickson: Youtube: https://www.youtube.com/@TaraErickson Instagram: https://www.instagram.com/taraerickson/ Twitter: https://twitter.com/thetaraerickson Support The Channel By Getting Some REEL REJECTS Apparel! https://www.rejectnationshop.com/ Music Used In Manscaped Ad: Hat the Jazz by Twin Musicom is licensed under a Creative Commons Attribution 4.0 license. https://creativecommons.org/licenses/by/4.0/ POWERED BY @GFUEL Visit https://gfuel.ly/3wD5Ygo and use code REJECTNATION for 20% off select tubs!! Head Editor: https://www.instagram.com/praperhq/?hl=en Co-Editor: Greg Alba Co-Editor: John Humphrey Music In Video: Airport Lounge - Disco Ultralounge by Kevin MacLeod is licensed under a Creative Commons Attribution 4.0 license. https://creativecommons.org/licenses/by/4.0/ Ask Us A QUESTION On CAMEO: https://www.cameo.com/thereelrejects Follow TheReelRejects On FACEBOOK, TWITTER, & INSTAGRAM: FB: https://www.facebook.com/TheReelRejects/ INSTAGRAM: https://www.instagram.com/reelrejects/ TWITTER: https://twitter.com/thereelrejects Follow GREG ON INSTAGRAM & TWITTER: INSTAGRAM: https://www.instagram.com/thegregalba/ TWITTER: https://twitter.com/thegregalba Learn more about your ad choices. Visit megaphone.fm/adchoices
Bring back the Draft? :: Some in DC discuss bringing back the draft :: Moms for Liberty insist a book on banning books be banned (tongue twister) :: Nashua protests after the Mayor denies flying a Revolutionary War flag because it flew on 1/6 :: Chris R., Lori, Riley Support Riley on Patreon: https://www.patreon.com/crblake86 Send Bitcoin: 1MnoYoPirXQHfhknDxbDHhLsF9u7kUggKy Send Bitcoin Cash: qpp62s8uupdqkrfew7vgp805pnsh5jk2ncnfkndwrd Dash: XpApo1jcPzTJyLLB6G8GJ7DoW9CGjcV5xT Ether: 0xFb1a23163bea743BB79B93849D864ad070597855 Litecoin: ltc1q6ygsamrkwl0at93datyqfh47z4crg4jkg4fx30
Speakers for AI Engineer World's Fair have been announced! See our Microsoft episode for more info and buy now with code LATENTSPACE — we've been studying the best ML research conferences so we can make the best AI industry conf! Note that this year there are 4 main tracks per day and dozens of workshops/expo sessions; the free livestream will air much less than half of the content this time. Apply for free/discounted Diversity Program and Scholarship tickets here. We hope to make this the definitive technical conference for ALL AI engineers.
ICLR 2024 took place from May 6-11 in Vienna, Austria. Just like we did for our extremely popular NeurIPS 2023 coverage, we decided to pay the $900 ticket (thanks to all of you paying supporters!) and brave the 18 hour flight and 5 day grind to go on behalf of all of you. We now present the results of that work!
This ICLR was the biggest one by far, with a marked change in the excitement trajectory for the conference. Of the 2260 accepted papers (31% acceptance rate), within the subset relevant to our shortlist of AI Engineering Topics we found many, many LLM reasoning and agent related papers, which we will cover in the next episode. We will spend this episode with 14 papers covering other relevant ICLR topics, as below.
As we did last year, we'll start with the Best Paper Awards. Unlike last year, we now group our paper selections by subjective topic area, and mix in both Outstanding Paper talks as well as editorially selected poster sessions. Where we were able to do a poster session interview, please scroll to the relevant show notes for images of their poster for discussion. To cap things off, Chris Ré's spot from last year now goes to Sasha Rush for the obligatory last word on the development and applications of State Space Models.
We had a blast at ICLR 2024 and you can bet that we'll be back in 2025
Chris R. gives a pitch on Steps Four and Five at the Appalachian Roundup in 2024. Find us at https://maddogspeakers.com/.
Over the last five days I have uploaded a workshop on Taking a Newcomer Through the Steps, led by Charlie P along with his wife Katie and Chris R. Before the event began, each took about 40 minutes and told some of their story; this is that portion of the event, and it runs just a bit over 2 hours. This was recorded at the London Ontario Primary Purpose Group in Canada in 2014. If you are anywhere near Atlanta, Georgia on the weekend of 3/22, the Atlanta Mens Workshop is taking place; it's a twice-a-year event that has been going since 1982. Details: https://scast.us/amw If you have any AA or Al-Anon event coming up, let me know; I am happy to help get the word out. https://scast.us/event (link fixed) Email: sobercast@gmail.com Support Sober Cast: https://sobercast.com/donate AA Event List: https://scast.us/event If you have an AA roundup, retreat, convention or workshop coming up, we would be happy to give you a shout out here on the podcast and list the event on the Sober Cast website. Visit the link above and look for "Submit Your Event" in the blue box. Sober Cast has 2500+ episodes available; visit SoberCast.com to access all the episodes where you can easily find topics or specific speakers using tags or search. https://sobercast.com
Joe, Claira, and Sydney are back from Austin, TX where they saw an INSANE amount of movies. No spoilers are in this episode, just reactions to all the big films! FOLLOW US ON SOCIAL MEDIA: Joe: Letterboxd, TikTok, Instagram; Claira: Letterboxd, TikTok; Sydney: Letterboxd, YouTube, IG, TikTok. Intro music created by Taylor Hollingsworth (Instagram). ********************************************** CHECK US OUT ON PATREON: patreon.com/HouseofCinema - Exclusive episodes on the MCU, Anime, Horror movies, and a lot more! Free episodes will always be available, but any subscriptions are greatly appreciated! THANK YOU TO OUR PATRONS: Bryan S, Andrew T, Brandon L, Chance G, David P, Anthony S, Jordan G, TiAnna P, Parker J, Ryan F, Maia P, Karlton L, Eryn M, Kat, Will TG, Dominque R, Elmina, Taylor H, Anthony P, Isaac H, Julia C, Flynn L, Diego V, Indigo, Gubberism, Brian S, Davis M, Samantha E, Chris R, Brandon M, Juan B, Tabitha E, Alyssa S, Colin J, Sarah V, Hutson B, Gracie, Harrison R, Bruce C, Jason R, Chance B, Gee M, Spencer M, Andrea, Sabrena, Jack H, Matt K, Chris V, Priya D, Jacob R, Haven C, Emma S, Liam A, Caitlyn-Anne, Connor Z, Logan L, Noah M, Lex, DJ R, Arqy, Nathan D, Joe R, Toby C, Garrett H, Helen K, Addison, Zachary M, Nate C, Emily B, Malik W, Nick R, Santiago, Reagan, Renee D, Israel, Nia R, Janou, Mason D, Brandon A, Des M, Jacob D, Dakota B, Sam P, Alexandra T, Jacob K, Samuel M, Jenna S, Melissa Q, Auds T, Logan Lu
This is part five of a five-part workshop led by Charlie P along with his wife Katie and also Chris R; they are presenting a workshop on Taking a Newcomer Through the Steps from the perspective of doing it with a sense of urgency. They do some Q&A at the end of the session; the questions were impossible to hear, so I have removed all the dead space, and they do a very good job of repeating the question before answering. This was recorded at the London Ontario Primary Purpose Group in Canada in 2014. Someone had emailed me asking about a slightly later version of this workshop, which reminded me I had this on my list to use... so here it is. The event itself is 8 recordings, not 5; the first three are the speakers each telling their stories for about 40 minutes. I am going to combine those talks into one episode and upload it the day after the actual workshop concludes; in reality the workshop opened with those. If you are anywhere near Atlanta, Georgia on the weekend of 3/22, the Atlanta Mens Workshop is taking place; it's a twice-a-year event that has been going since 1982. Details: https://scast.us/amw If you have any AA or Al-Anon event coming up, let me know; I am happy to help get the word out. https://scast.us/event (link fixed) Email: sobercast@gmail.com Support Sober Cast: https://sobercast.com/donate AA Event List: https://scast.us/event If you have an AA roundup, retreat, convention or workshop coming up, we would be happy to give you a shout out here on the podcast and list the event on the Sober Cast website. Visit the link above and look for "Submit Your Event" in the blue box. Sober Cast has 2500+ episodes available; visit SoberCast.com to access all the episodes where you can easily find topics or specific speakers using tags or search. https://sobercast.com
This is part four of a five-part workshop led by Charlie P along with his wife Katie and also Chris R; they are presenting a workshop on Taking a Newcomer Through the Steps from the perspective of doing it with a sense of urgency. They do some Q&A at the end of the session; the questions were impossible to hear, so I have removed all the dead space, and they do a very good job of repeating the question before answering. This was recorded at the London Ontario Primary Purpose Group in Canada in 2014. Someone had emailed me asking about a slightly later version of this workshop, which reminded me I had this on my list to use... so here it is. The event itself is 8 recordings, not 5; the first three are the speakers each telling their stories for about 40 minutes. I am going to combine those talks into one episode and upload it the day after the actual workshop concludes; in reality the workshop opened with those. If you are anywhere near Atlanta, Georgia on the weekend of 3/22, the Atlanta Mens Workshop is taking place; it's a twice-a-year event that has been going since 1982. Details: https://scast.us/amw If you have any AA or Al-Anon event coming up, let me know; I am happy to help get the word out. https://scast.us/event (link fixed) Email: sobercast@gmail.com Support Sober Cast: https://sobercast.com/donate AA Event List: https://scast.us/event If you have an AA roundup, retreat, convention or workshop coming up, we would be happy to give you a shout out here on the podcast and list the event on the Sober Cast website. Visit the link above and look for "Submit Your Event" in the blue box. Sober Cast has 2500+ episodes available; visit SoberCast.com to access all the episodes where you can easily find topics or specific speakers using tags or search. https://sobercast.com
This is part three of a five-part workshop led by Charlie P along with his wife Katie and also Chris R; they are presenting a workshop on Taking a Newcomer Through the Steps from the perspective of doing it with a sense of urgency. They do some Q&A at the end of the session; the questions were impossible to hear, so I have removed all the dead space, and they do a very good job of repeating the question before answering. This was recorded at the London Ontario Primary Purpose Group in Canada in 2014. Someone had emailed me asking about a slightly later version of this workshop, which reminded me I had this on my list to use... so here it is. The event itself is 8 recordings, not 5; the first three are the speakers each telling their stories for about 40 minutes. I am going to combine those talks into one episode and upload it the day after the actual workshop concludes; in reality the workshop opened with those. If you are anywhere near Atlanta, Georgia on the weekend of 3/22, the Atlanta Mens Workshop is taking place; it's a twice-a-year event that has been going since 1982. Details: https://scast.us/amw If you have any AA or Al-Anon event coming up, let me know; I am happy to help get the word out. https://scast.us/events (link fixed) Email: sobercast@gmail.com Support Sober Cast: https://sobercast.com/donate AA Event List: https://scast.us/events If you have an AA roundup, retreat, convention or workshop coming up, we would be happy to give you a shout out here on the podcast and list the event on the Sober Cast website. Visit the link above and look for "Submit Your Event" in the blue box. Sober Cast has 2500+ episodes available; visit SoberCast.com to access all the episodes where you can easily find topics or specific speakers using tags or search. https://sobercast.com
This is part two of a five-part workshop led by Charlie P along with his wife Katie and also Chris R; they are presenting a workshop on Taking a Newcomer Through the Steps from the perspective of doing it with a sense of urgency. They do some Q&A at the end of the session; the questions were impossible to hear, so I have removed all the dead space, and they do a very good job of repeating the question before answering. This was recorded at the London Ontario Primary Purpose Group in Canada in 2014. Someone had emailed me asking about a slightly later version of this workshop, which reminded me I had this on my list to use... so here it is. The event itself is 8 recordings, not 5; the first three are the speakers each telling their stories for about 40 minutes. I am going to combine those talks into one episode and upload it the day after the actual workshop concludes; in reality the workshop opened with those. If you are anywhere near Atlanta, Georgia on the weekend of 3/22, the Atlanta Mens Workshop is taking place; it's a twice-a-year event that has been going since 1982. Details: https://scast.us/amw If you have any AA or Al-Anon event coming up, let me know; I am happy to help get the word out. https://scast.us/events (link fixed) Email: sobercast@gmail.com Support Sober Cast: https://sobercast.com/donate AA Event List: https://scast.us/events If you have an AA roundup, retreat, convention or workshop coming up, we would be happy to give you a shout out here on the podcast and list the event on the Sober Cast website. Visit the link above and look for "Submit Your Event" in the blue box. Sober Cast has 2500+ episodes available; visit SoberCast.com to access all the episodes where you can easily find topics or specific speakers using tags or search. https://sobercast.com
This is part one of a five-part workshop led by Charlie P along with his wife Katie and also Chris R; they are presenting a workshop on Taking a Newcomer Through the Steps from the perspective of doing it with a sense of urgency. This was recorded at the London Ontario Primary Purpose Group in Canada in 2014. Someone had emailed me asking about a slightly later version of this workshop, which reminded me I had this on my list to use... so here it is. The event itself is 8 recordings, not 5; the first three are the speakers each telling their stories for about 40 minutes. I am going to combine those talks into one episode and upload it the day after the actual workshop concludes; in reality the workshop opened with those. If you are anywhere near Atlanta, Georgia on the weekend of 3/22, the Atlanta Mens Workshop is taking place; it's a twice-a-year event that has been going since 1982. Details: https://scast.us/amw If you have any AA or Al-Anon event coming up, let me know; I am happy to help get the word out. https://scast.us/events Email: sobercast@gmail.com Support Sober Cast: https://sobercast.com/donate AA Event List: https://scast.us/events If you have an AA roundup, retreat, convention or workshop coming up, we would be happy to give you a shout out here on the podcast and list the event on the Sober Cast website. Visit the link above and look for "Submit Your Event" in the blue box. Sober Cast has 2500+ episodes available; visit SoberCast.com to access all the episodes where you can easily find topics or specific speakers using tags or search. https://sobercast.com
Joe, Sydney, and Claira recap the winners of the 96th Oscars! Joe also discusses his experience HOSTING on the red carpet! FOLLOW US ON SOCIAL MEDIA: Joe: Letterboxd, TikTok, Instagram; Claira: Letterboxd, TikTok; Sydney: Letterboxd, YouTube, IG, TikTok. Intro music created by Taylor Hollingsworth (Instagram). ********************************************** CHECK US OUT ON PATREON: patreon.com/HouseofCinema - Exclusive episodes on the MCU, Anime, Horror movies, and a lot more! Free episodes will always be available, but any subscriptions are greatly appreciated! THANK YOU TO OUR PATRONS: Bryan S, Andrew T, Brandon L, Chance G, David P, Anthony S, Jordan G, TiAnna P, Parker J, Ryan F, Maia P, Karlton L, Eryn M, Kat, Will TG, Dominque R, Elmina, Taylor H, Anthony P, Isaac H, Julia C, Flynn L, Diego V, Indigo, Gubberism, Brian S, Davis M, Samantha E, Chris R, Brandon M, Juan B, Tabitha E, Alyssa S, Colin J, Sarah V, Hutson B, Gracie, Harrison R, Bruce C, Jason R, Chance B, Gee M, Spencer M, Andrea, Sabrena, Jack H, Matt K, Chris V, Priya D, Jacob R, Haven C, Emma S, Liam A, Caitlyn-Anne, Connor Z, Logan L, Noah M, Lex, DJ R, Arqy, Nathan D, Joe R, Toby C, Garrett H, Helen K, Addison, Zachary M, Nate C, Emily B, Malik W, Nick R, Santiago, Reagan, Renee D, Israel, Nia R, Janou, Mason D, Brandon A, Des M, Jacob D, Dakota B, Sam P, Alexandra T, Jacob K, Samuel M, Jenna S, Melissa Q, Auds T, Logan Lu
***This episode does contain spoilers for Dune Part 2*** MAY THY KNIFE CHIP AND SHATTER! Dune Part 2 is officially out, so Joe is joined by Claira AND Sydney to discuss the latest sequel. Where does it rank in Denis Villeneuve's filmography? When were you sold on Dune Part 2? Is Dune the next Lord of the Rings? FOLLOW US ON SOCIAL MEDIA: Joe: Letterboxd, TikTok, Instagram; Claira: Letterboxd, TikTok; Sydney: Letterboxd, YouTube, IG, TikTok. Intro music created by Taylor Hollingsworth (Instagram). ********************************************** CHECK US OUT ON PATREON: patreon.com/HouseofCinema Free episodes will always be available, but any subscriptions are greatly appreciated! CHECK OUT OUR MERCH: https://houseofcinemapod.com/ THANK YOU TO OUR PATRONS: Bryan S, Andrew T, Brandon L, Chance G, David P, Anthony S, Jordan G, TiAnna P, Parker J, Ryan F, Maia P, Karlton L, Eryn M, Kat, Will TG, Dominque R, Elmina, Taylor H, Anthony P, Isaac H, Julia C, Flynn L, Diego V, Indigo, Gubberism, Brian S, Davis M, Samantha E, Chris R, Brandon M, Juan B, Tabitha E, Alyssa S, Colin J, Sarah V, Hutson B, Gracie, Harrison R, Bruce C, Jason R, Chance B, Gee M, Spencer M, Andrea, Sabrena, Jack H, Matt K, Chris V, Priya D, Jacob R, Haven C, Emma S, Liam A, Caitlyn-Anne, Connor Z, Logan L, Noah M, Lex, DJ R, Arqy, Nathan D, Joe R, Toby C, Garrett H, Helen K, Addison, Zachary M, Nate C, Emily B, Malik W, Nick R, Santiago, Reagan, Renee D, Israel, Nia R, Janou, Mason D, Brandon A, Des M, Jacob D, Dakota B, Sam P, Alexandra T, Jacob K, Samuel M, Jenna S, Melissa Q, Auds T, Logan Lu
Her web connects us all... Sydney and Joe tackle the latest movie to break down what exactly happened to Madame Web. FOLLOW US ON SOCIAL MEDIA: Joe: Letterboxd, TikTok, Instagram; Claira: Letterboxd, TikTok; Sydney: Letterboxd, YouTube, IG, TikTok. Intro music created by Taylor Hollingsworth (Instagram). ********************************************** CHECK US OUT ON PATREON: patreon.com/HouseofCinema - Exclusive episodes on the MCU, Anime, Horror movies, and a lot more! Free episodes will always be available, but any subscriptions are greatly appreciated! THANK YOU TO OUR PATRONS: Bryan S, Andrew T, Brandon L, Chance G, David P, Anthony S, Jordan G, TiAnna P, Parker J, Ryan F, Maia P, Karlton L, Eryn M, Kat, Will TG, Dominque R, Elmina, Taylor H, Anthony P, Isaac H, Julia C, Flynn L, Diego V, Indigo, Gubberism, Brian S, Davis M, Samantha E, Chris R, Brandon M, Juan B, Tabitha E, Alyssa S, Colin J, Sarah V, Hutson B, Gracie, Harrison R, Bruce C, Jason R, Chance B, Gee M, Spencer M, Andrea, Sabrena, Jack H, Matt K, Chris V, Priya D, Jacob R, Haven C, Emma S, Liam A, Caitlyn-Anne, Connor Z, Logan L, Noah M, Lex, DJ R, Arqy, Nathan D, Joe R, Toby C, Garrett H, Helen K, Addison, Zachary M, Nate C, Emily B, Malik W, Nick R, Santiago, Reagan, Renee D, Israel, Nia R, Janou, Mason D, Brandon A, Des M, Jacob D, Dakota B, Sam P, Alexandra T, Jacob K, Samuel M, Jenna S, Melissa Q, Auds T, Logan Lu
Claira and Joe discuss the latest Zelda Williams-directed and Diablo Cody-written movie, Lisa Frankenstein! Will it go down as a cult classic similar to Jennifer's Body? In the second half of the pod, Claira and Joe discuss their 5 favorite Rom Coms within the last 5ish years! FOLLOW US ON SOCIAL MEDIA: Joe: Letterboxd, TikTok, Instagram; Claira: Letterboxd, TikTok; Sydney: Letterboxd, YouTube, IG, TikTok. Intro music created by Taylor Hollingsworth (Instagram). ********************************************** CHECK US OUT ON PATREON: patreon.com/HouseofCinema - Exclusive episodes on the MCU, Anime, Horror movies, and a lot more! Free episodes will always be available, but any subscriptions are greatly appreciated! THANK YOU TO OUR PATRONS: Bryan S, Andrew T, Brandon L, Chance G, David P, Anthony S, Jordan G, TiAnna P, Parker J, Ryan F, Maia P, Karlton L, Eryn M, Kat, Will TG, Dominque R, Elmina, Taylor H, Anthony P, Isaac H, Julia C, Flynn L, Diego V, Indigo, Gubberism, Brian S, Davis M, Samantha E, Chris R, Brandon M, Juan B, Tabitha E, Alyssa S, Colin J, Sarah V, Hutson B, Gracie, Harrison R, Bruce C, Jason R, Chance B, Gee M, Spencer M, Andrea, Sabrena, Jack H, Matt K, Chris V, Priya D, Jacob R, Haven C, Emma S, Liam A, Caitlyn-Anne, Connor Z, Logan L, Noah M, Lex, DJ R, Arqy, Nathan D, Joe R, Toby C, Garrett H, Helen K, Addison, Zachary M, Nate C, Emily B, Malik W, Nick R, Santiago, Reagan, Renee D, Israel, Nia R, Janou, Mason D, Brandon A, Des M, Jacob D, Dakota B, Sam P, Alexandra T, Jacob K, Samuel M, Jenna S, Melissa Q, Auds T, Logan Lu
Our first ever demo day aimed for 15-20 people and ended up ballooning to >200 and getting covered in the news. We are now running the 2024 edition in SF on Feb 23: Latent Space Final Frontiers, a startup and research competition in "The Autonomous Workforce", "Beyond Transformers & GPUs", and "Embodied AI". RSVP here! You can find all LS online/IRL events on our new calendar. Super Early Bird tickets have just gone on sale for AI Engineer World's Fair, June 25-27!
Today we have the honor of hosting two of Together AI's co-founders: Ce Zhang (CTO) and Vipul Ved Prakash (CEO). This is a rare opportunity to recap the history of the company since our last check-in with Tri Dao (Chief Scientist), cover some of their big releases, and do a deep dive into the state of the AI inference market. Together has emerged as one of the most consequential new startups of the new AI summer, last announcing a ~$100m Series A raise in November (at a ~$360-565m valuation). But there are at least three Togethers - Together the Research Lab, Together the Fine Tuning & Inference platform, and Together the custom models service. As we clarify on the pod, the overarching philosophy of Together is the ability to improve on all these fronts simultaneously by being "full stack", from the lowest level kernel and systems programming to the highest level mathematical abstractions driving new model architectures and inference algorithms.
Bringing Research and Industry Together
In just one year, Together has been behind some of the most exciting research in AI:
* RedPajama, a fully open source dataset for model pre-training which mirrored the Llama1 recipe, then followed by RedPajama2, a 30T-token dataset of filtered and de-duplicated tokens.
* RedPajama-INCITE-3B and 7B, which were SOTA in a few benchmarks at the time of release.
* FlashAttention-2, developed by Together's Chief Scientist Tri Dao. We covered FA-2 in a previous episode with him.
* Mamba-3B, the most promising transformer-alternative model that they released in collaboration with Cartesia.
* StripedHyena, a SOTA graft of Hyena state space models and transformer models together.
* Medusa, an alternative to speculative decoding that lets you use multiple decoding heads instead of a draft model.
* MonarchMixer, which was one of the most popular orals at NeurIPS 2023. It's an approach to transformers that replaces many of their core parts with Monarch matrices for better computational efficiency.
And I'm sure we missed something! As Vipul reveals, almost 50% of Together staff are researchers, and two of their co-founders (Chris Ré and Percy Liang) are professors at Stanford, so we can expect a lot more here.
Bringing "Disaggregated" GPUs Together
On their cloud, they offer inference as a service, fine-tuning, pre-training, etc., but unlike other providers they think of themselves as a disaggregated cloud. Today, they have ~8,000 A100 and H100 GPUs on their platform (an exclusive revealed on the pod!) totaling over 20 exaflops of compute, but instead of just buying more, putting them in a cluster, and exposing a `us-east-1` option for customers, they are taking heterogeneous compute sources and adding a unified layer on top for developers to consume. Building on Ce's research, Together's GPU Clusters are taking on comparable AWS and GCP offerings in both cost and speed. Take the Hessian AI center in Germany or the DoE's INCITE; they have GPUs that they want to share with researchers, but they lack the cloud layer over it.
Similarly, there's starting to be more and more differentiation amongst types of GPUs: H100s, A100s, MI300s, etc. Each of them has different availability and performance depending on the task, and the end user shouldn't have to be a hardware expert to run inference on a model, so Together abstracts a lot of that away.
A big theme of the Together inference stack, a "bag of 50 tricks" that we discuss on the pod, is also "hardware-aware" algorithms like FlashAttention and Mamba, which further emphasize the benefits of co-developing everything together.
Special Focus: Transformer Alternatives
As we mentioned above, they are also funding a lot of research in Transformer alternatives. To reiterate a few points on why they matter:
* Longer context is not the motivation for sub-quadratic architectures: Transformers don't inherently have hard limitations on context size, but they just get extremely expensive. When developing sub-quadratic alternatives, you easily enable very long context, but that's not how you should compare them. Even at the same context size, inference and training are much cheaper on sub-quadratic architectures like Hyena.
* Emergence of hybrid architectures: a lot of early conversations have been around the "post-Transformers" era, but it might be more like "half-Transformers". Hybrid architectures could have split layers with some transformer-based and some state-space ones. One of the challenges is that a lot of hardware kernels are optimized for transformer operations, so you'd lose a lot by moving away completely.
* Higher speed = higher GPU throughput: if we could reach the same benchmark performance on subquadratic architectures, it'd solve a lot of the GPU crunch. Today we peak at ~170 tok/s on inference in some open models; if we could reach 5,000 tok/s on the same card, you'd be able to serve roughly 30x more customers on the same hardware (a back-of-the-envelope sketch of this follows below). As a cloud provider, you're obviously incentivized to get there.
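To put rough numbers on that last bullet, here is a minimal back-of-the-envelope sketch in Python. The 170 and 5,000 tok/s figures are the ones quoted above; the quadratic-vs-linear mixing-cost comparison is a deliberately simplified model of attention vs. state-space layers (it ignores MLPs, batching, and memory bandwidth) and is not a benchmark of Together's stack.

```python
# Rough capacity arithmetic for the throughput bullet above.
# All numbers are illustrative assumptions, not measured benchmarks.

def serving_gain(current_tok_s: float, target_tok_s: float) -> float:
    """How many times more concurrent streams one card could serve, all else equal."""
    return target_tok_s / current_tok_s

def attention_mixing_cost(seq_len: int) -> int:
    """Self-attention token mixing grows roughly quadratically with sequence length."""
    return seq_len ** 2

def ssm_mixing_cost(seq_len: int) -> int:
    """State-space-style mixing grows roughly linearly with sequence length."""
    return seq_len

if __name__ == "__main__":
    # ~29x, i.e. the "30x more customers" figure quoted above
    print(f"throughput gain: ~{serving_gain(170, 5_000):.0f}x")

    # Why sub-quadratic architectures get cheaper as context grows
    for n in (4_096, 32_768, 131_072):
        ratio = attention_mixing_cost(n) / ssm_mixing_cost(n)
        print(f"context {n}: attention/SSM mixing-cost ratio ~{ratio:,.0f}x")
```

Even this crude model shows why a cloud provider cares: matching Transformer quality at sub-quadratic cost pays off in serving capacity well before you ever need million-token contexts.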
We had a lot of fun chatting with the Together guys and we covered a lot of ground, so enjoy the conversation! Note: This is the first episode of a "cloud providers mini-series". We have Erik from Modal and Ben from Replicate coming up next!
Video Podcast
Join us to watch the video version of this pod on our snazzy YouTube!
Show Notes
* Together AI
* RedPajama Dataset v1 Announcement
* RedPajama Models v1 Announcement
* Together Embeddings
* StripedHyena-7B
* Mamba-3B-SlimPJ
* Vipul's X thread on Anyscale
* Vipul's Razor
* SemiAnalysis' "Inference Race to the Bottom" post
* Chris Ré
* Mike Conover's episode
* Slim Pajama by Cerebras
* Dolma by AI2
* Jina AI
* Tengyu's Voyage AI
Timestamps
* [00:00:00] Introductions
* [00:00:43] Origin and current state of Together.ai
* [00:02:15] Transition from Apple to Together and the vision for open AI
* [00:04:54] How Chris Ré introduced Ce and Vipul
* [00:08:43] How RedPajama came to be
* [00:13:34] Model training and Transformer alternatives
* [00:15:37] DSIR and the importance of data in LLMs
* [00:21:19] Inference vs Fine-tuning vs Pre-training usage on Together
* [00:23:20] Together's GPU stash
* [00:27:02] Why standardization of inference metrics is important
* [00:29:26] Building moats in AI inference
* [00:31:49] Federated vs disaggregated cloud computing
* [00:34:57] Opportunities for improvement in the inference stack
* [00:36:13] Anyscale benchmarking drama
* [00:41:27] Not just an inference platform
* [00:43:50] Together Embeddings and the future of embedding models
* [00:45:53] State space models and hybrid architectures
* [00:53:52] The need for 5,000 tokens/s speed in AI inference
* [01:00:23] What's the most interesting unsolved question in AI?
Transcript
Alessio [00:00:00]: Hey, everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO in Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.Swyx [00:00:14]: Hey, and today we're together with Together. Welcome to the studio, guys.Ce / Vipul [00:00:20]: Thank you.Swyx [00:00:21]: I don't know how you typically give self intros, but does anyone want to go first? How do we get our audience acquainted, especially to who's speaking, because it's unusual for us to do a four-person pod. Yeah.Ce [00:00:33]: Hi, everyone. I'm Ce. I'm one of the co-founders of Together and the CTO, working with the team on technical things.Vipul [00:00:40]: I'm Vipul Ved Prakash, co-founder and CEO of Together.Swyx [00:00:43]: I always consider you guys as one of the sort of all-in-one companies. I always want to say labs, but I feel like you're not a lab. What is the sort of origin of Together, and then what is it today? I feel like it used to be Together.xyz, and then now you're Together.ai.Vipul [00:01:00]: I think fundamentally, Together is about open and independent AI systems. We think this is one of the most consequential technologies of our time, and when we started the company in June 2022, our focus was to build a platform for open source, independent, user-owned AI systems. One way to think about it is big labs, frontier model labs, have built their own platforms for developer platforms for their models. We think of Together as a platform for everything else, whether these are open models, whether these are models being built by companies that are owned by them. Our sort of XYZ roots, we have a fairly deep decentralization and open ethos that kind of reflects in all our platform and strategy and business.
And we also, the way we structure our cloud is by combining data centers around the world instead of, you know, we are today not located in hyperscalers, we have built a footprint of AI supercomputers in this sort of very disaggregated, decentralized manner.Alessio [00:02:15]: I know before Together, you were at Apple, so you go from like the most walled garden, private, we don't say anything company, to we want everything to be open and everybody to know somebody. What maybe did you learn from like the Apple way of being super close and polished and maybe what are you taking now to Together to make it open, but also a very nice developer experience?Vipul [00:02:37]: Yeah, I would say, you know, one sort of my, you know, background has been in open source for a long time. One of the first things I created was a collaborative spam filter, you know, this was back in the day. It's called Vipul's Razor. And it became quite popular. And the first company I founded called CloudMark was built around, you know, taking open source and building both an open side of it and a commercial product around it. I think Apple is sort of very focused on providing this amazing experience to its customers with, you know, most of the technology sort of hidden behind the product. And certainly the focus on fluidity and applying complex technology to make everyday things simple is something that Apple does really well. And, you know, that's been a sort of big part of how we think about our developer platforms. I think it informs it. The other thing is that during my years at Apple, we, you know, worked a lot on deep learning. And one of the things that was sort of very viscerally accessible to me was how well these systems worked. We, you know, we built an open domain Q&A system. This was based on Facebook's LSTM paper in 2016. And it was remarkable because we had a parallel system based on sort of information retrieval techniques, which is extremely complicated, didn't work that well. And you know, this thing we wrote in a week was just incredible performance. So I think some of those experiences, at least for me personally, sort of were creating this roadmap of how important and powerful this technology is. And you know, when the scaling loss paper was published, I was very clear, like it was in some ways something very profound. We've never had algorithms that improve in capabilities with scale out. So this is almost a new era of computing. So that's been, I think, the influence of Apple, my years at Apple, really for me, like crystallized the value of what we are doing together.Alessio [00:04:54]: And how did you decide to join forces? Because you did a postdoc with Chris Ré at Stanford. You know, we already had Tri Dao from Together and we talked about Hazy. What was like the meeting of the mind of, hey, I come from like the more technical postdoc assistant professor background and we've got yet a more product thing. What got you excited to like build this now?Ce [00:05:15]: So we have been working on this together, Chris, in the essentially last like 10 years, right? So it was like a machine learning system 10 years ago was like Power BI's graphic model, right? And then convolutional neural network and then all the foundation model that we see today. But if you look at this, I think that fundamentally the thing we are actually optimizing is actually not that different. It's always about data movement across essentially all the stacks, right? 
So when you do distributed like computing, it's about communication across different machines. When you do, for example, flash attention, it's about data movement at a different essentially memory hierarchy, right? So we have been doing this in the last 10 years and seeing the field start grow, grow, grow. So we kind of feel the current kind of this like wave of technology is actually the perfect time to actually bring all the research essentially into something real. And we are super lucky that we got introduced to Weibo, right? And then we hope to join forces and bring this to real world.Swyx [00:06:10]: It's an unusual team of like sort of research and industry. Like you've been like a third or fourth time founder now. Third time founder, yeah. And so like what is your first order of business when you like set up together? Like how do you sort of put something like this together? Oh my God, I'm going to use this word so much.Vipul [00:06:27]: I feel AI companies are really kind of driven by research. And Chris and I had been talking about how to reduce the cost of building models. We felt that there aren't really big data modes around foundation models. They are built from a subset of the web. What is difficult is the cost of capital to build these. And one of the ways in which you can reduce this cost is by making more efficient systems. With that, it was really about finding the right set of co-founders and team. In fact, when Chris introduced me to Ce, and I think within the first five minutes of talking to Ce, I was like, we are starting this company. And our early focus was thinking about this more sort of disparate set of resources, you know, GPUs around the internet. Can we use those to build? And we really have to compress communication for, you know, when we do gradient averaging, there's just a lot of traffic. And if you can reduce that somehow, you sort of open up the possibility of using cheaper compute, you know, across the network. And Ce's research for a decade has been in that subject. You know, and from there, finding, you know, other folks in the network, I think there is generally a lot of excitement and philosophical alignment around what we are doing, which, you know, we publish papers, we publish open source libraries and code, we build open models. And I think the people in academia in, you know, machine learning and NLP, that's really what they want to do. So I think that's been really a kind of kernel for, you know, composition of the company. And we're lucky to have, you know, at this point, attracted some of the best researchers in the field. So I think that's the most important thing. And, you know, the rest of it is sort of driven by us. A couple of these philosophies around independent systems and decentralization and good developer interfaces, you want to make it accessible. That's, you know, just as important. And the rest follows from there, I think.Alessio [00:08:43]: I want to try and fill in some of the blanks in the history of Together. I think people come on your website today and they say, you raised a hundred million dollars Series A. They're like, wow, these guys are like super legit company. But it feels like Red Pajama just came out a year ago. I remember we had Mike Conover in the studio, who had built Dolly at Databricks. And you announced it literally the morning we were recording. So we're like in the studio on our phones, looking at it. 
And it's like, wow, this is like the first time now there's like a good curated dataset to do open pre-training. So maybe let's start from there. Like, what was the motivation behind it? Why did you decide to do that? It's, datasets are one of the things that most people don't want to work on. They just want to do models, not datasets.Ce [00:09:27]: Yeah. So, yeah, first one is not the first, right? So I think it's actually built on a whole bunch of amazing effort the community already have. For example, Eleuther have the pile, right? There's a whole bunch of amazing datasets they have, like C4, right, from Google, right? So I think really get inspired by the impact those like datasets have on the community, right? So I think when we did Red Pajama, it was a time that people are really fascinated by Lama, the model, like Lama 1, right? Which I feel like decades ago, right? But it's kind of, people are really excited about the quality, right? So that's really like a big shift in people how to think about open model. People start to see hope, right? So, but the one problem of Lama is the data recipe is being described in a pretty detailed way in the paper, but the data is actually not there. So, and our original thinking is how about we take the recipe and we try to do our best effort reproduction and try to put it out, such that we can learn from our mistakes in the reproduction together, right? So that's essentially the original thinking behind Red Pajama. And we have been pretty happy and excited about what community have been kind of build on it. For example, there's a dataset called Slim Pajama, right? Which do deduplication over our data, right?Swyx [00:10:38]: From Cerebras, did they talk to you before?Ce [00:10:39]: Oh, yeah, yeah, yeah, yeah. So, yeah, so we are very good friends so we can discuss about technical perspective. We are pretty excited because I think it's kind of why we do Red Pajama in the first place is that people can actually build not only models, but also datasets essentially over that piece of artifact, right? So that's actually what inspired us to do the first version of Red Pajama dataset.Swyx [00:11:01]: Yeah, and then you released V2 maybe two months ago.Ce [00:11:04]: Yeah.Swyx [00:11:05]: 30 trillion tokens.Ce [00:11:06]: Yeah, 30 trillion tokens. So I think what's exciting about Red Pajama V2 is not only the number of tokens, but we start to kind of learn from Red Pajama V1. So one thing that we learned was that data quality is really the core, right? So you want to take this couple trillion token dataset and try to bring them down maybe to one trillion or two trillion, right? The way that you actually filter them, deduplicate them is not something that kind of pre-decided before you see the application, right? So you kind of want to have a modular framework to think about data quality, right? So like given application, let's automatically or maybe semi-automatically try to come up with a way to filter it down. So that's why in Red Pajama V2, we kind of overlay the dataset with like 40 different pre-computed quality signal, right? If you want to reproduce your best effort, like C4 filter, it's kind of like 20 lines of code, right? And this open up this opportunity you can actually put different filter together, learn the combination of filter. We are very excited to see what community actually come up with using Red Pajama V2.Swyx [00:12:11]: It was retrospectively so obvious that this is a good idea that I wonder how come more datasets don't do this. 
You release the dataset with all these toggles that you can turn on and off, right? And you can sort of tune up and down the quality in ways that you believe is important to you. Yeah, I just, it makes so much sense now in retrospect. Because everyone just publishes like their pipeline and then the end result. But what about all the intermediate stages? Yeah.Ce [00:12:35]: Yeah, so I think, so there are multiple things there. I don't think we are the only one like doing that. For example, like Doma from AI2, right? They have this very flexible format to actually put in those quality signals, right? Think like, we are actually calling them some, right? So you can actually load Red Pajama using their tool. That whole thing should work, right? So I think one fundamental thing that changed in the last year, essentially, in the beginning when people think about data, it's always like a byproduct of the model, right? You release the model, you also release the data, right? The data side is there essentially to show people, ah, if you train on this data, you'll get a good model. But I think what started to change is when people started building more and more of those models, people started to realize like different subset of data side is kind of valuable for different applications, right? The data becomes something to play with, right? So I think we are kind of lucky that we happen to release Red Pajama right at that point that we get this opportunity to actually learn from that.Alessio [00:13:34]: And you guys have a custom model training platform on Together 2. You have a bunch of stuff in there for data selection, like the DSIR and things like that. How did you decide to work on that versus, because you first started with like some of the fine tunes on LLAMA. Do you see a lot of interest there? And I know you've been doing a lot of research on state space models and other transformer alternatives. Like, do you also see that as something you'll keep working on this year and push more people towards?Vipul [00:14:02]: Yeah, I mean, we, you know, we think of how to make training more efficient and building models more efficient. Part of that is being able to select the right dataset. This is why you have signals, DSIR. You can start with a small dataset and find similar documents, build models with that. So we think it's an important part of the kind of model build tooling that, you know, sort of widely useful for people building different kinds of models. Similarly, you know, we are running into the limits of how fast you can make transformers. And we want inference at 5,000 tokens per second. I don't think we will get there with transformers and we need to learn longer sequences. Data, again, becomes very, very expensive with transformers. So I work on space state models and all the research that we are doing there. And hopefully other labs will pick up on this and make it a kind of important target for optimization. But we think that, you know, open source is a great place for this. We can provide these recipes for data and for training to our customers who are building, you know, custom models themselves. And, you know, we are quite excited about the sort of progress we are seeing there.Alessio [00:15:18]: Do you have some of these models available for inference on Together? 
Can people play around with a strictly, you know?Swyx [00:15:25]: Yeah.Vipul [00:15:25]: Yeah, they're available for inference on our serverless platform.Swyx [00:15:29]: I always try to be the person who asks about acronyms in case, you know, people want to understand. Should we explain importance resampling, you know, that kind of stuff?Ce [00:15:37]: Oh, yeah. So DSIR essentially, it's a fundamental idea. So it's one of the paper from Percy, right? So essentially, if you know what you are doing, you can actually use that as a very strong signal about what data to put in to insert training process, right? So that's essentially the fundamental idea, right? So, and then more concretely, right? So there are actually different versions of DSIR, right? So one version is like if you have a validation site, right? You can actually somehow measure the similarity between the validation site and also your pre-trained corpus and essentially subset, like the subset. And often there's actually like less targeted version of DSIR where you'll say, yeah, maybe Wikipedia is actually a very good corpus. Let's try to find more Wikipedia, right? And you can think about it in two ways, either as a way to come up with different weights for different data slices. Yeah, so as like filter type of step. Yeah, for a data set, or think about that as like data augmentation. So that's how, yeah, that's how we think about DSIR.Swyx [00:16:33]: That makes sense. I will have to read the paper to understand a little bit more. Because when you say things like, we have to know in advance what we were trying to do with the model, then we do importance resampling. That is against the principle of general intelligence, right? Like the point is to train AGI.Ce [00:16:48]: Yeah, so it depends on what do you mean by being general or generic, right? So I think, I mean, you can always take a meta-learning perspective that we know the distribution of tasks that we care about, right? So you can always go kind of up in the ladder of how general the whole thing is, right? But also for many of the customers that we are actually talking to, right, they have kind of very targeted application, right? The benefit you can get out of that is you could build a better open model, often smaller, often easier to do inference, if you know what you want, right? So I think the whole trade-off would be, and the x-axis would be how generic the whole thing will be. The y-axis would be not only the top accuracy, but also a whole bunch of the deployment cost, right? The size of the model, right? The robustness of the model. So I think different people will navigate the space in different way. And we want to be the platform, essentially, whatever point that you want, we have a solution for you.Swyx [00:17:43]: One more thing on data before we go deeper on state-space models. Are we running out of data? Can we go in order of magnitude? Can we go five orders of magnitude? How do both of you think about how much data we have and how much we need?Ce [00:17:55]: Yeah, so I think that's a very, very good question. So I don't think we are running out of data on Earth.Swyx [00:18:02]: Right, so think about it globally. Training data, training class data.Ce [00:18:05]: Yeah, yeah, so I think, I mean, some of them are not accessible, right? But I do think there are many organizations in the world have enough data to actually train very, very good models, right? So, I mean, they are not publicly available, right? 
Swyx [00:17:43]: One more thing on data before we go deeper on state-space models. Are we running out of data? Can we go an order of magnitude? Can we go five orders of magnitude? How do both of you think about how much data we have and how much we need?

Ce [00:17:55]: Yeah, so I think that's a very, very good question. So I don't think we are running out of data on Earth.

Swyx [00:18:02]: Right, so think about it globally. Training data, training-class data.

Ce [00:18:05]: Yeah, yeah, so I think, I mean, some of it is not accessible, right? But I do think there are many organizations in the world that have enough data to actually train very, very good models, right? So, I mean, they are not publicly available, right? But there are people who actually have access to those, right? So I think in general, if you think about the data in the open space — I guess that's specifically what you mean by whether we are running out of data — I do think there needs to be some way, right, that people who are training open models get connected with data that's not internet data. So I think that channel needs to be opened up for open models to get more data, right? But I'm kind of on the optimistic side, that society will figure out a way that we can train open models on data beyond this internet data.

Swyx [00:18:57]: Beyond internet, meaning books?

Ce [00:19:00]: I mean, there are a lot of those, right?

Swyx [00:19:02]: Books, right?

Ce [00:19:02]: Transcripts, right? Videos, audio, right? So there are a whole bunch of data sources that we are not integrating into the open data side, right? And maybe they shouldn't be open, right? So I think the community needs to figure out the best balance, yeah? Such that we can have open models, but on the other hand, also have a reasonable collection of data that we can actually use.

Swyx [00:19:29]: I think a lot of people think that, there's a theory that Whisper was released so that you could transcribe YouTube and then use that as a source of tokens. Then I talked to other researchers who are like, you know, YouTube has very low quality tokens. You know, do you want your model to talk like a live streamer from YouTube? Because that's what they're going to do. So it's not clear what the quality of this data could be.

Ce [00:19:53]: Yeah, I guess that depends on your application, right? So I think, as a platform, our goal is that whatever application you have, we have a platform so you can actually achieve your goal, right? So there are definitely applications where it makes sense to speak like YouTube, right? But there are probably also other applications that are more on the formal side, right? So I think there are going to be a diverse collection of models, both open and closed, right? And we kind of want to be the engine that powers that.

Swyx [00:20:21]: There are a lot of people who own data sources who are doing the locally optimal thing, and humanity as a whole is losing out. So, like, the New York Times is suing OpenAI, you know, Stack Overflow shut down their API, Reddit shut down their API, X, you know, made their own model, right, on Twitter data. We're just going to have all these tiny little gardens of data that would be useful in a general model, but everyone's just trying to make their own model. And it seems globally suboptimal.

Vipul [00:20:47]: I think you need to have some kind of a marketplace for figuring out how to get this, you know, data into models, and I think we'll increasingly see more of that. You know, I think there's a positive aspect to it too. There is an incentive for creators to participate in a system which is sort of more fair relative to, you know, the capture of value by an AI company that's taking their data. But I agree. I think this is a big open problem that needs to be solved. And I hope there will be, you know, serious efforts around it.

Alessio [00:21:19]: Let's talk about the most precious resource on planet Earth, GPUs. You have a lot of compute obviously, but you also have a lot of product pieces. You have inference, you have fine-tuning, you have pre-training. What's the split in terms of usage?
Do you see most people just running inference on off-the-shelf models? Do you see maybe some last-mile fine-tuning?

Vipul [00:21:40]: I would say right now, the top five models on our inference stack are probably all fine-tuned versions of open models. And we've seen- Who fine-tuned them?

Swyx [00:21:51]: You fine-tuned them?

Vipul [00:21:52]: They were fine-tuned by our customers.

Swyx [00:21:54]: By your customers.

Vipul [00:21:55]: You know, either on our platform or off our platform. And we are generally seeing that, you know, that is the sort of trend where you can get better quality on your task by now easily adapting these models to your data. We also have, I would say, over 20 big model builds happening on the platform for customers. We see a lot of training, and it's also, somewhat surprisingly, a more continuous kind of workload. We sort of imagined that this would be more episodic: you train a model and then you do inference. But what we find is, you know, they train a model and then they train the next version and then the next version, which sort of grows in scale. I would say training is still the bigger portion. In some ways, inference is superlinear with model quality, and as the models are getting better, there's more and more inference.

Swyx [00:22:48]: Oh, because they're more useful. Yeah, they're more useful, yeah. So, okay, so training is bigger. This is actually consistent with what we've heard from Mosaic, that, you know, people think that training is sort of a one-time deal, you do one big run and then you're done. It's never true. And so I'm interested in putting some numbers on it, and I don't know what you have disclosed or what you want to disclose, but, like, how many GPUs do you have? What is the equivalent amount of compute that you have? Because I understand that your GPU setup is different than what people typically think of, like a giant data center somewhere, right?

Vipul [00:23:20]: I don't think we have shared this number publicly. It's, you know, so this will be the first time, I guess. We have close to 7,000 to 8,000 GPUs today. It's growing monthly.

Swyx [00:23:31]: What class of GPU are they?

Vipul [00:23:32]: They're mostly A100s and H100s.

Swyx [00:23:35]: Okay.

Vipul [00:23:36]: And probably more, I think, split towards H100s now. You know, we'll be sort of building this best-of-class hardware. So as there are other versions of these coming out later this year, we plan to have those in the fleet as well.

Alessio [00:23:53]: I know when we talked last year, you were also using some of the supercomputers from the Department of Energy. There was kind of a lot of random GPU compute in the world. Have you seen that kind of getting timed out? I think maybe a year ago, people were like, oh, yeah, you can use this GPU computer that is going to be end-of-life. Has the bar changed to give access to those resources?

Ce [00:24:13]: From our perspective, it's actually getting better. Yeah, so from the community perspective, because many of the institutions in the world are actually investing in hardware, right? So for example, we are working with one of the institutes in Germany called Hessian AI, right, which gives us a lot of help on the compute side. So they are starting to have this very big GPU cluster, and they're actually sharing that with the community, right? And it's not super big, right, but also not a small one, right? So you start to see different efforts like this start to pop up, right?
And because of the power of the community, they start to actually share that. So we actually find that, as a researcher today, it's probably easier to actually get a GPU than last year.

Swyx [00:24:56]: Interesting.

Alessio [00:24:56]: And then for you to buy them, what's the state of the market right now? Is it still extremely hard to get any? Do you have Jensen's phone number? Do you have, like, GM's phone number? Do you guys get, like, the SDR because you're under 10,000?

Vipul [00:25:12]: NVIDIA is obviously motivated to help us, both as an investor and because we are their customer. I would say the market is very tight still, and it's likely going to be this way for a while. My sense is that the demand for AI computing has just ramped up very, very quickly, and it will take a while for supply to catch up.

Swyx [00:25:37]: So how tight is it? Let's say compared to a year ago, two years ago, what do you mean when you say tight? The things you want, you can't get?

Vipul [00:25:42]: You can't get them immediately. They're, you know, minimally like two to three months out. Any inventory that shows up tends to clear very, very rapidly. And, you know, we obviously look at this in a very detailed and analytical way. There are four to five million GPUs that will be sold this year by NVIDIA and others. And if you think about a 512 to 1,000 GPU cluster for a company, that's 4,000 to 8,000 companies, right? So it's in some ways a very small number. In other ways, the cost of GPUs will be, you know, $80 to $100 billion, and then you layer servers and data center space and electricity on top of that, and that's, you know, close to $250 billion worth of compute, which, when you compare it to the cloud computing of today — you know, AWS's revenue last year was $88 billion — this is really a build-out of AI hyperscalers happening. It is much more disaggregated, and it's very, very global. So, you know, we think that GPUs are going to be sort of a precious resource for a long time, and using them optimally is very valuable.
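The back-of-envelope above can be reproduced directly; the per-GPU price here is an assumption inferred from the quoted totals, not a figure from the conversation.

```python
# Reproducing the rough arithmetic quoted above. The per-GPU price is an
# assumption inferred from the quoted totals, not a number from the transcript.
gpus_sold_low, gpus_sold_high = 4_000_000, 5_000_000   # GPUs shipped this year (quoted range)
price_per_gpu = 20_000                                  # assumed average price, USD

# 512- to 1,000-GPU clusters per company (quoted range)
companies = [gpus_sold_low // cluster for cluster in (1_000, 512)]
gpu_spend_billion = [n * price_per_gpu / 1e9 for n in (gpus_sold_low, gpus_sold_high)]

print(f"companies served: ~{companies[0]:,} to ~{companies[1]:,}")                  # ~4,000 to ~7,812
print(f"GPU spend: ~${gpu_spend_billion[0]:.0f}B to ~${gpu_spend_billion[1]:.0f}B")  # ~$80B to ~$100B
# Adding servers, data-center space, and electricity pushes the total toward
# the ~$250 billion build-out figure mentioned above.
```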
Swyx [00:27:02]: Yeah.

Alessio [00:27:02]: Our friend Dylan Patel from Semianalysis, he wrote a post about the inference market recently and obviously mentioned you guys. In his post, he said, our model indicates that Together is better off using two A100 80GB-based systems rather than an H100-based system. The temperature and performance testing also point to Together utilizing speculative decoding. Any thoughts? Is Dylan right? I don't know, what's-

Swyx [00:27:26]: What is his model, man? What does he know that they don't know? Yeah, exactly.

Alessio [00:27:30]: I wanna know, I guess, like, from the outside, and sometimes we even do it, we try and speculate on what people are actually doing. So for the first time, now we have a former guest writing about a current guest. So we wanna know what you guys thought, and maybe what are some of the misconceptions that people from the outside have on what it takes to run a GPU cloud today?

Vipul [00:27:50]: Yeah, big fan of Dylan's, by the way. I religiously read Semianalysis. I think there were some errors in that analysis. In particular, we were trying to decode it, and one of the things we noticed is that it assumed that input tokens weren't being priced. So I think that may have been an error in the model. I also don't think that's the case — there's this assumption that people are running this at a loss, but I think it's very expensive. You can't do that for very long. And there are trade-offs in terms of the batch sizes you use and the kind of tokens-per-second performance, which are kind of system trade-offs. We've done a lot of work here. This is one of the key areas of research for us. So our inference stack is a combination of 50 different sort of tricks and techniques, and we think there's a lot of room for optimization here. So whichever hardware provides better performance, whether it's H100s or A100s or L40s, we can measure price performance on particular hardware and we tend to use that for that model, or in some cases, certain customers have data streams which can then be optimized for a particular configuration regime. So we do fairly detailed work on how to make this more efficient, and so it's hard, from the outside, looking at memory bandwidth, to estimate what's actually happening.

Alessio [00:29:26]: How many of these 50 tricks are you keeping to yourself, and how many are you going to open up? Because you have Tri now, and obviously Flash Attention 2 is open source. He mentioned he'd love to come work at Together because of how much you care about open source. Yeah, how do you weigh that as a CEO and CTO?

Vipul [00:29:43]: A lot of it is open, right? Flash Attention, Flash Decoding, et cetera. And when we publish something that's very generally, universally useful and is going to produce better open source AI, we tend to publish it as open source. I think on the inference stack, there are open source inference stacks which are pretty good, and definitely today it gives us a competitive advantage to have the best one. So we are not rushing out to release everything about it. It's not overall that additive to open source out there, and it is particularly useful as a business for us to provide the best price performance. Yeah, we make these decisions. We have discussions. Anything that we keep closed, we generally talk about it quite a bit and decide, like, this is the piece that is closed for today, and it may not be the case six months from now. It may not matter as much.

Ce [00:30:40]: Yeah, so I think being open is kind of very important, right? So I think the whole company is actually built on this idea that there's going to be an ecosystem built on our open models, right? And that's also how we were really lucky to attract this top group of talent to actually join us, because of the dream and the mission that we have on our side to really facilitate the open ecosystem, right? So I think, in general, all the ideas should be open. So that's why we publish papers, right? We actually talk about ideas, right? So I don't think it makes any sense to keep ideas closed, right? There are some software artifacts that are really deeply embedded into our own stack. They're kind of only useful when you're trying to build a disaggregated cloud, right? Maybe at some point they're going to be open, as people have said, right? But at this moment, we are kind of busy actually building it, right? So that's probably the picture of when that piece is going to be open, right? But I think on the research side, the ideas, and for our people to publish things, I think that's really, really important, right? So I think that's how we get talent. That's how, I think, we as a company are going to move the field forward.

Swyx [00:31:49]: I noticed that you never used the words federated learning or inference.
Is there a distinction that you draw?

Ce [00:31:55]: So, I mean, it's definitely not intentional, but I think federated learning has been used in so many different ways by so many different people that it starts to lose a very precise meaning about what it really means, right? If you go back to the original Google paper on federated learning, I think that's very different from what people are talking about today when they say federated. Yeah, we kind of want to be really precise about it.

Swyx [00:32:18]: And so your term is disaggregated.

Ce [00:32:19]: Yeah, so as an infrastructure, right? So that's disaggregated.

Swyx [00:32:22]: Aren't most clouds disaggregated? Like, what's different about it?

Ce [00:32:27]: So one thing is that most of the clouds are disaggregated, but some of that is actually being exposed to the user, right? If you go to AWS, you do know which region you are in, right? So I think one thing that we are trying to do is, you have this disaggregated cloud, not only in terms of location or geographically where they are, but in terms of the reliability and also the diversity of this infrastructure. And we want to build a reliable, high-quality layer over that, where the user actually doesn't know, right, what's actually happening under the covers, right? So I think that's one of the differences in the way that we are thinking about infrastructure.

Swyx [00:33:06]: Yeah, a bit closer to Cloudflare than AWS. Yeah. Yeah. We have one question here, which we'll just throw out, it's kind of fun. So going back to this sort of inference stack piece, maybe if you had to pull out a call for researchers, or just point out interesting areas of work that you're interested in, what pieces of the stack have the most opportunity for improvement?

Ce [00:33:27]: Yeah, so I think the way we are thinking about the inference stack is, there are multiple things that can happen, right? So you can do better algorithms, like speculative decoding, you can change the model architecture, you can go really crazy on the system side, right? And you can also go down to the hardware, right? So it's not really clear that innovation on a single dimension will get you there. So the key thesis on our side is, if you only push in one direction, you are going to reach diminishing returns really, really quickly. Yeah, there's only so much you can do on the system side, only so much you can do on the algorithm side. I think the only big thing that's going to happen is when you get all those dimensions to actually compound, right? So to have algorithm, model, and system all come together — I think that's how we reach the next 10x improvement on inference, right? So I don't think there's a single dimension that is particularly important, but looking at this space in a joint way, trying to co-optimize jointly across multiple dimensions, I think that's going to be really important for the community to look at.

Vipul [00:34:28]: Yeah, we often see — I see numbers from the team — you have these multiple methods, and not all of them compound. So you mix some of these together and it's still similar results, and then some combination of them will have this incredible effect that is really, really super interesting. So it's very much a systems, you know, a kind of broad systems approach to it that's the most effective.
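Speculative decoding, mentioned above as one of the algorithm-side levers, can be sketched roughly as follows. This is the greedy-verification variant with placeholder callables, not any provider's production implementation, and it omits the probabilistic acceptance rule used when sampling.

```python
# Minimal sketch of greedy speculative decoding: a small draft model guesses a
# few tokens ahead and the large target model verifies them in one batched
# forward pass. `draft_step` and `target_argmax` are placeholder callables,
# not a real library API.
from typing import Callable, List

def speculative_decode(
    prompt: List[int],
    draft_step: Callable[[List[int]], int],           # small model's next-token guess
    target_argmax: Callable[[List[int]], List[int]],  # target's next-token argmax at each position
    n_new: int,
    k: int = 4,
) -> List[int]:
    tokens = list(prompt)
    while len(tokens) < len(prompt) + n_new:
        # 1) draft k tokens autoregressively with the cheap model
        draft, ctx = [], list(tokens)
        for _ in range(k):
            t = draft_step(ctx)
            draft.append(t)
            ctx.append(t)
        # 2) one target forward over prompt+draft; output at position i is the
        #    target's preferred token for position i+1
        preferred = target_argmax(tokens + draft)[len(tokens) - 1:]
        # 3) accept the longest prefix where draft and target agree, then take
        #    one guaranteed token from the target
        n_accept = 0
        while n_accept < k and draft[n_accept] == preferred[n_accept]:
            n_accept += 1
        tokens += draft[:n_accept] + [preferred[n_accept]]
    return tokens[: len(prompt) + n_new]
```

The win comes from the target model verifying several positions per forward pass instead of one, which is why the technique trades extra memory bandwidth for lower latency.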
Swyx [00:34:51]: I think I finally get the name of the company, like- Bring it together, yeah. Everything needs to be optimized together.

Alessio [00:34:57]: All right, just quickly, how does all this work change as some of the architectures change? I know with a mixture of experts, for example, speculative decoding is a little less efficient because of memory bandwidth. How much do you invest when it's maybe a model-specific improvement versus a more horizontal thing? Also, you're researching different architectures, so how much do you want to spend time optimizing what's state of the art today versus what's coming next?

Vipul [00:35:24]: We do spend time on what's state of the art today as well as what's next. You know, the value we get from doing specific optimization, even for, you know, what works well for a particular model on A100s with a particular bus versus H100s, it's a worthwhile investment for us. So we will go down fairly deep into a specific architecture and specific hardware. It does also inform what works better where, and you don't have to take the same approach for, you know, every model and every sort of hardware setup. We can take these different approaches, and we do have these multiple systems now. We know that, you know, system B is better for Mixtral and system C is going to be better for Stripe Hyena or Mamba.

Alessio [00:36:13]: Before we move on from inference, we need to talk about the Anyscale drama. So we're actually having Sumit on the podcast tomorrow, who also talked about it and kind of came to your guys' support, about how it's not just, oh, Together is saying this benchmark's not good because they look bad in it. I guess it's a hard question to ask, but why did you decide to just come out and say it? And how does that maybe also reflect the values that you guys have about open source and openness, and being transparent about what's real, and maybe hopes for standardizing some of these benchmarks to make it more clear?

Ce [00:36:56]: So it's a great service Anyscale is doing for the community, right? I mean, it's very hard to do a benchmark. The moment you do a benchmark comparing N players, N minus one will be unhappy. If you have two tables, then maybe all N of them will be unhappy, right? So it's a very great thing that they're doing. And in some of the work that we are doing, we actually use LLMPerf, right? So it's a great thing that they're actually doing. So I think one thing about benchmarks — and this is probably the professor part of me talking — is that a good benchmark should think about how it's going to incentivize the field to actually move forward, right? So if the benchmark really becomes a kind of standard, how are people going to over-optimize to the benchmark? And when people are doing that, what are we actually trying to incentivize, right? Will that move the world to a better place? Or will that essentially have every single player focus on marketing, or spending time or money on something that does not actually matter on the technical side, right? It's very hard to actually strike a balance, right? So I think the reason we tried to give feedback on the benchmark is that we kind of want to open up the discussion about how the industry should come together and define maybe a common way that we compare with each other, right? Like how database people do TPC, right? Maybe we should have something similar, right? So we are trying to start some of the conversation.
So it's not really that we jumped out to say it's not good, because there's no way we can have a perfect benchmark; that doesn't really exist, right? We're just trying to kickstart a conversation that maybe we should come together and do something that the community agrees on and that aligns with the benefit a user is going to get, right? So, just get the conversation started.

Vipul [00:38:42]: I've spoken to the Anyscale team after that, and I think they had really great intentions. And partly, I think they felt it was very objective, but everyone sort of had a reaction to it because it just didn't match the benchmarks that we've all run internally against different services. I think a common industry benchmark should be run by an independent party rather than one of the vendors.

Swyx [00:39:04]: Is there one that you would point to?

Vipul [00:39:06]: I don't think one exists today. I think there should be. We're having some conversations about someone setting one up. And there are lots of interesting aspects of this. Time to first token is a function of where the test was run from. There is different load on these services at different times of the day, and weekday versus weekend. So you have to measure that well. And I think if all of that were done very well by an independent source, that would be a very useful service to customers and to the services themselves.

Swyx [00:39:39]: Yeah, I'll point people to artificialanalysis.ai, which is a new one that recently emerged. I don't know if they've done it right. It looks like a side project of a couple of people. But I think it's in all the providers' interest to work with them and ensure that there's an independent third party that's measuring these things, right? At least on the baseline. For me, what's worrying is more about what Ce was saying, which is, do these benchmarks skew things in ways that customers might not be mindful of? Like, what are these things overemphasizing that we might be missing? And I don't really know. It seems like a lot of these services bundle in a version of quantization as well. So that means there are performance trade-offs, right? You're not comparing apples to apples, the same model itself, even though it's a Llama variant or whatever. So what do people trade off? They trade off latency, they trade off price. Obviously, those are the first two. But what else, right? What factors matter in an inference business?

Ce [00:40:33]: Yeah, so I think there's also the throughput, right? So there's the time to first token, right? And then there are things that users do not often see — for example, the reliability, right? The capacity, right? So that also has an impact on user experience at a global scale. Maybe not for a single query, right? But in aggregate, you can also see a whole bunch of things, like whether you are emphasizing P50 or P95, right? So there's a whole bunch of things that you can actually play with. And of course, there's also quality. So there are different ways to actually make the whole thing faster — speculation, quantization, or a combination of those, right? So yeah, there are so many things to actually play with. So we probably need a benchmark where the protocol is transparent, to make sure it's very clear what we are doing, and a whole bunch of checks on quality to make sure we are putting the right group of models in the same table. Then, essentially, the user can actually navigate the space. So I think that's going to be good for everyone.
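The metrics being discussed — time to first token, tokens per second, and P50/P95 over many requests — are straightforward to measure client-side. A toy sketch, where `stream_tokens` stands in for whatever streaming client an inference service exposes (it is not a specific provider's API), and each request is assumed to stream at least one token:

```python
# Toy client-side measurement of TTFT and tokens/sec, with P50/P95 summaries.
import time
import statistics
from typing import Callable, Iterable, List

def measure(stream_tokens: Callable[[str], Iterable[str]], prompts: List[str]):
    ttfts, rates = [], []
    for p in prompts:
        start = time.perf_counter()
        first, count = None, 0
        for _tok in stream_tokens(p):
            count += 1
            if first is None:
                first = time.perf_counter() - start   # time to first token
        total = time.perf_counter() - start
        ttfts.append(first)
        rates.append(count / total)                    # end-to-end tokens per second

    def pct(xs, q):
        return statistics.quantiles(xs, n=100)[q - 1]  # q-th percentile

    return {
        "ttft_p50_s": pct(ttfts, 50), "ttft_p95_s": pct(ttfts, 95),
        "tps_p50": pct(rates, 50), "tps_p95": pct(rates, 95),
    }
```

As noted above, where the test runs from and when it runs both shift these numbers, so a transparent protocol matters as much as the metric definitions.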
Swyx [00:41:27]: Yeah, makes sense. It's a very important field, and I think hopefully there's a good third party that emerges from this. So I just want to touch on one more piece, which is, I think I'm appreciating from this discussion that fine-tuning is a bigger part of your business than I thought. The other big player in fine-tuning is Mosaic. Well, Mosaic is more training, but there's a bunch of other players in the fine-tuning space. If I was a prospective fine-tuning customer, what do I come to you with? Do I come to you with my custom data and that's it? Do I also have to write the fine-tuning code? What level of engagement do you do with your customers?

Vipul [00:42:01]: I think it's across the spectrum. Our customers are training models, pre-training models from scratch, and many of them will bring their datasets and use our infrastructure and training stack to train their models. There are others who have trained smaller models and want to scale up — scale up across infrastructure, scale up across data. So we'll sort of help them do that. We have customers where it initially starts a little bit more consultative: they have a particular task and idea in mind, and we will help them get from there to the dataset and the right model to achieve that task. So it's a spectrum, and, you know, our goal is, we're trying to productize as much of this as possible, so that the whole process can be fast and scalable. I would say there is a lot more understanding around fine-tuning now, even in the last six months. There are, you know, open source tools, recipes, literature, podcasts, Discord channels where people are figuring it out, and it really is, in many ways, one of the successes of open source: you have small collectives of, you know, engineers who are now creating the top models on open source leaderboards and have tried out all sorts of different, you know, data recipes, creating synthetic data. Merging models. So that's really fun to see. And I think that sort of agency that exists now is exciting. And we see a lot of that being applied into products and, you know, more commercial models that people are deploying in their applications.

Alessio [00:43:50]: And then just to, I guess, wrap up on Together, it's almost becoming like a platform as a service, because now you released Together Embeddings. How did you get 92.5 accuracy on 32K retrieval? And do you think we're kind of reaching the limit on embeddings — like, we did everything that we could, they're about as optimized as they're going to get, and we should just focus on models and inference — or do you think there's still room there to improve?

Ce [00:44:17]: Oh, I think we haven't even gotten started on embeddings. Yeah. So I think there are so many things. So embeddings are really fundamental for many things — for example, RAG, right? In that application, that's how people bring knowledge in. That's also the fundamental piece when you want to build a better model, right? It gives you this understanding about what actually gets into the model. You can actually use that to build a better dataset, get a better model, then get better embeddings, and you start this loop, right? Without good embeddings, the loop is not closed, right? So I think it's both on the quality side — how to embed more dedicated semantics into those vectors, how to deal with negation, for example, right? — and how can you make the whole thing really, really fast? So I think for the next couple of years, yeah, we will see a whole bunch of new embeddings, maybe of different sizes and much, much faster than today. Yeah, so I think it's a very active research area. I think people should invest more, yeah.
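For readers unfamiliar with how embeddings power RAG, here is a small sketch of the retrieval loop: embed the documents once, embed the query, and return the nearest documents by cosine similarity. `embed` is a placeholder for any embedding model's encode call, not a specific API.

```python
# Minimal retrieval-by-embedding sketch: cosine similarity over L2-normalized vectors.
import numpy as np
from typing import Callable, List

def build_index(docs: List[str], embed: Callable[[List[str]], np.ndarray]) -> np.ndarray:
    vecs = embed(docs)
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)  # normalize once up front

def retrieve(query: str, docs: List[str], index: np.ndarray,
             embed: Callable[[List[str]], np.ndarray], k: int = 5) -> List[str]:
    q = embed([query])[0]
    q = q / np.linalg.norm(q)
    scores = index @ q                      # cosine similarity via dot product
    top = np.argsort(-scores)[:k]
    return [docs[i] for i in top]
```

The retrieved passages are what get stuffed into the model's context, which is why embedding quality (including things like negation handling, mentioned above) directly bounds RAG quality.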
Swyx [00:45:14]: I was surprised to see, I think Jina — yeah, there's Jina AI, and then there's another guy, Tengyu's Voyage. They are coming out as startups purely focused on embeddings.

Ce [00:45:25]: Yeah. Yeah, so I think it's a very, very important piece of the system, right? People haven't focused a lot on them before, and they should definitely start to do that.

Swyx [00:45:36]: Yeah. Why are the Chinese universities so good at embeddings? You know what I mean, right? Like the BGE and- Yeah, yeah, yeah.

Ce [00:45:44]: So I don't know. We just released our first embedding model, so we are still trying to learn how to build an embedding model. Yeah, so ask me again in six months.

Swyx [00:45:53]: I'll probably have more insight about how to build a better one. I just noticed that ada-002 used to be at the top of the MTEB leaderboard, and then it's just sliding down and down and down, and all the new models are coming out of China for some reason. And I'm like, I don't know what's going on there. So, we cannot leave this discussion without talking about state space models. But first of all, how much of the company is dedicated to research? Like, it's obviously not production quality yet, but-

Vipul [00:46:17]: I would say it's like 40, 45%. I was counting this morning. That's huge.

Swyx [00:46:22]: Yeah, so that's the biggest- It's a big investment. Yeah. Okay, well, I mean, it looks like it's paying off, so. And then, high level, I will confess or admit or mention, for the listeners who are also similarly skeptical, I did not used to care about long context, because I was like, you know, 30K is enough, 100K is enough, right? I'm not, you know, modeling DNA sequences or anything like that. Why do I need long context? And, I mean, first of all, I'll throw that open to you. But second of all, I think what Mamba did for me was change the perception that it's only about long context — that the only reason you want sub-quadratic architectures is for long context. Actually, that's not true. It's also just more efficient to train, period, right? I'll just leave that open to you. What's the motivation that people should keep in their heads? There are multiple things, right?

Ce [00:47:09]: So one thing is that, I mean, the moment a model can do long context well, it often means that it's kind of cheaper. Yeah, so I mean, the two kind of go together. In principle, a transformer can do long context. It's just very expensive. So I think what those state-based models are trying to do is push the size of the state to be as small as possible, right? That's how you get long context, right? And try to decouple this quadratic dependency, right, to make sure you can have a much better execution pattern. One direct consequence of that is you can do long context really cheaply, but on the other hand, it also introduces a whole bunch of benefits even when you are not doing long context, right? So I think that's actually probably equally important. Because the state gets smaller, you can do a really large batch size, right? You can actually go much faster, right? So yeah. And another thing is, one of the hypotheses that we have is, like in Stripe Hyena, it starts to have a hybrid architecture, right?
Part of it is a state-based model and part of it is still the transformer. So different components probably deal with different things better. So maybe by putting them together, by thinking about how information propagates over the whole horizon of the context, you can probably get an even better quality model than a transformer, right? So I think that's why we kind of invest a lot in those models. Not only for the context, which is very important, but also for a whole bunch of benefits they could bring.

Swyx [00:48:42]: Yeah. How should people treat the distinction between Mamba and Stripe Hyena? What's the point of releasing these two as separate models? Is one sort of the Together proprietary one and the other the more open research one?

Ce [00:48:53]: Yeah. So I think it's pretty much a different stage of exploration. They kind of have different hypotheses behind them when we try to build those. Like, for instance, there are different views about state-based models. One is Hyena, another is Mamba, right? They're actually different architectures. So when we built Stripe Hyena, the curiosity that we had was: what is the highest quality non-transformer model we can ever build? The goal of Stripe Hyena is to try to see whether we can match Mistral, and by fine-tuning well, whether we can outperform it in some way, right? So it has a very, very strong baseline that we are trying to beat. So that's why the hybrid came into the picture, right? And for Mamba, it's kind of more — the curiosity was how far can we push a pure architecture? There we started very systematically from small to large, all the way to 3 billion, right? So the baseline was essentially the best 3 billion model. So I guess they're at different stages of exploration. At some point, I think they are going to converge. We actually learn different things when building different models. I think they are just intermediate stages in the exploration at different points.

Alessio [00:50:02]: You mentioned the hybrid architecture. Is that the model grafting that you mentioned in the Stripe Hyena post, where you mentioned you can have transformers and non-transformers together? This is a concept that I hadn't heard before reading about this. I think most people's mental model is transformers OR something else — it's not transformers AND something else. How do you train a model that is hybrid? Is there any difference in how you construct your datasets? Is there any difference then in how you run inference on it? How should people think about starting research in this field?

Ce [00:50:36]: Yeah, so we were also very surprised when we came up with this hybrid architecture. So the way to think about it is, you have different layers in the neural network, right? For some layers, the state-based model will already give you the benefit. Other layers could be transformers, right? They could give you this more global view of the sequence, but other layers don't have to have that, right? You can still have all the other things kick in, right? So we don't know what the optimal mixture between different architectures is. I mean, in principle, we could have Mamba, Hyena, and transformer layers all come together, right? And then you can see what makes sense. We have no idea what is optimal doing that. So what we are excited about is that now the community has a whole bunch of building blocks that they can actually play with like Lego, right? Just put them together and see what happens, right? So we are kind of very excited about that. Yeah, we are in the process of trying to learn more about this architecture. And when we know what we are talking about, we will definitely share with the community about how to do that in a systematic way.
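A toy sketch of that "Lego" idea — interleaving different block types in one stack. This is purely illustrative: the attention and depthwise-convolution blocks below are generic PyTorch stand-ins, not the actual Stripe Hyena or Mamba layers, and the layer schedule is an arbitrary choice.

```python
# Illustrative hybrid stack: a few attention blocks interleaved with a cheap
# O(L) token mixer (a causal depthwise convolution standing in for a
# state-space/long-convolution layer).
import torch
import torch.nn as nn

class AttentionBlock(nn.Module):
    def __init__(self, dim, heads=8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                      # x: (batch, seq, dim)
        h = self.norm(x)
        out, _ = self.attn(h, h, h)
        return x + out

class ConvMixerBlock(nn.Module):
    """Stand-in for a state-space-style mixer: linear in sequence length."""
    def __init__(self, dim, kernel=7):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.conv = nn.Conv1d(dim, dim, kernel, padding=kernel - 1, groups=dim)

    def forward(self, x):
        h = self.norm(x).transpose(1, 2)       # (batch, dim, seq) for conv
        h = self.conv(h)[..., : x.shape[1]]    # trim right padding -> causal
        return x + h.transpose(1, 2)

def build_hybrid(dim=256, depth=8, attn_every=4):
    # e.g. one attention block every 4 layers, cheap mixers elsewhere
    return nn.Sequential(*[
        AttentionBlock(dim) if (i + 1) % attn_every == 0 else ConvMixerBlock(dim)
        for i in range(depth)
    ])
```

Swapping which block type sits at which depth is exactly the kind of mixture question the conversation says is still open.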
Swyx [00:51:41]: Cool. What are we still unsure about? Like, why don't we just, you know, put all the money in the world into training these things now? What is left to figure out before we scale this thing?

Ce [00:51:53]: So, if you look at how the transformer has been developed over the last five to ten years, people didn't start from, you have the Attention Is All You Need paper and then let's put all the money in, right? It always starts from this very systematic understanding about the scaling, about data quality, about essentially the limits, right? I think for a state-based model to go from the labs to the real world, you kind of need to go through the same process. But of course, the second time doing that is kind of easier, right? But I think there's no way we can get rid of this systematic step of studying scaling laws, studying what data to put in, right? So, what's the impact of different data slices on the final model quality?

Swyx [00:52:33]: Do you expect that the data inputs will be different?

Ce [00:52:37]: I don't know, but I wouldn't take it for granted that they should be the same, right? That's one of the hypotheses. We have no opinion on that, because I think that should be the result of the study, not the assumption. Yeah, we do not need to assume that.

Swyx [00:52:51]: Okay, scaling laws and data — anything else, like architectural, that we are not sure about? Because now you have this selection mechanism that you're pretty happy with.

Ce [00:52:59]: Yeah, so, I mean, first of all, how to mix them, right? And second is, what is the architecture? So if you look at the transformer, one very interesting piece there is that people also optimize the hardware, yeah, to make sure that things run very fast, right? There are very efficient kernels, very efficient hardware. And that adds another boost, right, for the transformer architecture, right? So that's something that should happen for state-based models. Which architecture is easier to run on the hardware, right? So things go faster, you can put in more data, and it adds another dimension to the scaling law. So I think we just need to plow through the whole space and just be really systematic, from small models to 1 billion, 3 billion, 7 billion — just go all the way up, right? So I wouldn't jump around in the space. I would just be patient and be systematic. Yeah, I think we'll get there, yeah.
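The "be systematic" step typically means fitting a scaling law at small scales and extrapolating before committing to a big run. A hedged sketch of that fit, using a standard power-law-plus-offset form and made-up placeholder data points:

```python
# Fit L(N) = a * N**(-alpha) + c to losses measured at small scales, then
# extrapolate. The model sizes and losses below are made-up placeholders,
# not results from the conversation.
import numpy as np
from scipy.optimize import curve_fit

params_n = np.array([125e6, 350e6, 760e6, 1.3e9, 3e9])   # model sizes (hypothetical)
losses   = np.array([3.95, 3.55, 3.30, 3.12, 2.95])      # eval losses (hypothetical)

def power_law(n, a, alpha, c):
    return a * n ** (-alpha) + c

(a, alpha, c), _ = curve_fit(power_law, params_n, losses, p0=(10.0, 0.1, 2.0), maxfev=20000)
print(f"fit: L(N) = {a:.2f} * N^(-{alpha:.3f}) + {c:.2f}")
print(f"extrapolated loss at 7B params: {power_law(7e9, a, alpha, c):.3f}")
```

The same exercise can be repeated along other axes (tokens, data mixture, architecture variant), which is the "another dimension in the scaling law" point above.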
Swyx [00:53:52]: Yeah, well, I'm looking forward to more research from you guys to figure that out. So, one dimension which we didn't talk about: we talked about long context, we talked about efficiency, but speed is also very important. A good inference provider provides, let's say, 70 tokens per second, and maybe that's faster than less good inference providers that are more like 30 tokens per second. But that's the rough range, right? State of the art today. That's around human speaking speed; human reading speed is about 200 words per minute. Why do we need 5,000 tokens per second is my question back to Vipul. And maybe, is this something that is an emphasis for research as well, or is this more just an inference-only thing?

Vipul [00:54:29]: There are applications that are consuming the tokens that are produced from a model, so they're not necessarily being read or heard by humans. That's a place where we see that level of requirement today that really nobody can quite satisfy. There is also, if I think about it, as intelligence grows, how do you sort of increase the bandwidth of it, how do you reduce the latency of it? If we can do 5,000 tokens a second, the throughput of the same card goes up significantly and can support more applications. So I think it's important from that perspective. And then it opens up new UX possibilities. Once you can get sort of an immediate answer
**THIS EPISODE CONTAINS SPOILERS** Joe and Sydney discuss who the REAL agent Argylle is...but also what exactly happened to make Argylle the first big movie of 2024 that is both a critical and potentially financial flop.
Sydney and Joe have returned from Park City where they saw over 20+ feature films and documentaries! They discuss the ones they loved the most and the ones you need to keep an eye out for!
The Film Drunk returns to discuss our FINAL predictions for what will be nominated for the 2024 Oscars!
Joe welcomes back Megan and Bobbi to the House of Cinema, this time, to draft movies that have a "Rotten" score on Rotten Tomatoes! Let us know who you think won in the poll!
Joe & Sydney saw the first big movie of 2024, the new Blumhouse horror movie Night Swim! Does it defy the odds or does it keep the Trashuary tradition going?
Joe is joined by Johnathan aka The Film Drunk on YouTube, to break down the race for Best Picture and who currently has the lead but also...who might be a sneaky dark horse.
Sydney & Joe discuss the final movie in the DCEU. How does it hold up as sequel to a billion dollar movie? A win or a loss for DCEU fans? We also discuss our favorite DCEU movies!
Chris R from Ingram Texas leads this workshop on sponsorship at the 2nd annual Oswaldtwistle Gathering in Oswaldtwistle in the United Kingdom in January of 2011. For those of you unfamiliar with Chris he speaks and leads workshops all over the world and has been for many years. 90 min. Email: sobercast@gmail.com Support Sober Cast: https://sobercast.com/donate AA Event List: https://scast.us/events If you have an AA roundup, retreat, convention or workshop coming up, we would be happy to give you a shout out here on the podcast and list the event on the Sober Cast website. Visit the link above and look for "Submit Your Event" in the blue box. Sober Cast has 2400+ episodes available, visit SoberCast.com to access all the episodes where you can easily find topics or specific speakers using tags or search. https://sobercast.com
Claira and Joe discuss the latest new release, Wonka! How does it compare to the other movies about Willy Wonka? How is Timothy Chalamet as the iconic Wonka? What do we want to see next from Chalamet?
Claira & Joe discuss the latest Miyazaki movie, The Boy & the Heron! New masterpiece? Oscar potential? Where is it in our Miyazaki rankings? Listen and find out!