Will the Swiss tax system be overhauled? If parliament has its way, yes. But individual taxation, a central reform project of the current legislative period, must also be approved by the people, and it is providing plenty of ammunition: who gains, who loses, and what does it cost the state? Council of States members Martin Schmid (FDP) and Stefan Engler (The Centre) debate the issue on «Controvers».
And what was bound to happen... happened. The police, the judges, the court and all the rest. We won't lie to you: while we always anticipated the occasional difficulty with the justice system, we never imagined it would be over an article about architecture and greenwashing, or for publishing a feminist collective's right of reply. Yet here we are, so let us explain.

Everything is (more or less) well explained in the audio above, but if you prefer to read, here is a quick summary.

In June 2023, we published an excellent article by the researcher and architect Mathias Rollot: "Architecture et greenwashing ou comment biodiversifier le béton." A few days later, the lawyer for the ChartierDalix agency, known in particular for defending reality TV and Gérard Depardieu, sent us a formal notice. The agency, whose ecological practices are discussed at length in the article, demanded that we take it down under threat of legal action. The tone being crude and the attempt at intimidation absurd, we bet that they would not risk making themselves ridiculous to the point of actually suing us. In the end, they did. An article in Le Monde covers the case here. An open letter of support signed by hundreds of researchers and architects was published there, along with various editorials in architecture journals, such as here.

Far less surreal, a second complaint was filed against lundimatin two years ago now. Following a glowing article about Laurie Lassalle's film Boum Boum, a women's collective asked us to publish a right of reply, not in response to our article but because of the presence in the film of a man accused of rape by one of them, who makes statements in it that they consider repugnant. The man in question chose to file a defamation complaint against lundimatin, the authors of the text not being identifiable a priori. A trial will be held this coming September 30 at the TGI de Paris.

As we explain in the video, there is a real stake in winning both of these proceedings. However, defamation trials involve particularly technical law, which requires a great deal of working time, lawyers, and lawyers' working time. That is why we are launching a fundraiser to help us win. Thank you! The link to the fundraiser is here.

Do you love, or at least read, lundimatin and want it to continue? Good timing: to celebrate our ten years of existence, we are launching a major fundraising campaign. To help and encourage us, it's this way.
The VoiceOver Pod made possible by Such A Voice with your host Justine Reiss
On this podcast episode I dive deep with Wesley Stevens, the founder of Vox, Inc. Wes shares his fascinating journey from starting as an intern in the voiceover industry to becoming a highly successful agent. In this episode, explore the evolution of voiceover, the impact of technology on the industry, and what it takes to make it as a voiceover artist today. Whether you're a seasoned professional or new to the scene, this episode is packed with insights and advice from one of the industry's most experienced agents.

Wes has advocated for talent as a profession since 1994. He started in Columbus, Ohio at a small agency booking variety acts before joining Talent Group, Inc. (TGI) in Los Angeles in 1995 as an assistant. Seven years later, he acquired TGI's voice-over department and launched VOX, Inc. Wes has represented some of the biggest names in film, television, social media and music, alongside brands including Pixar, Disney, DreamWorks, Fox, GE, Apple, Dodge, Jaguar and Sprint. His first booking of note was placing David Hyde Pierce in "A Bug's Life," and the journey took him back to Pixar in 2009, when he placed Ed Asner in "Up." Along that road there have been many campaigns, series, films, and fond memories. Having built the company off the springboard of his specialization in animation and gaming, the company's success led to diversification into celebrity endorsements and innovative deals in hosting, podcasting, AI and other emerging media and technologies. Wes helps talented people create powerful, equitable, and long-term relationships. He thrills at the opportunity to connect the right talented people with each other.

Wes was born in Virginia. He is a military brat, an Eagle Scout and a graduate of the University of Virginia. Wes has run seven full marathons, from Honolulu to Florence, Italy. He is very involved with Amazon Conservation Team and with Best Buddies, a global charity promoting the full integration of individuals with intellectual disabilities into mainstream society. He is passionate about creativity in all its expressions. Wes resides in Los Angeles with his husband and two pups.

Want to connect with Wes? You can find him at:
Vox Website: https://voxusa.net/#about
Vox IG: https://www.instagram.com/vox_inc_usa/

Want to connect with Justine? You can find her at:
Website: https://empoweredvoicecoach.com/
Email: justine@suchavoice.com
IG: @justinereiss
And to receive an INTRO TO VOICEOVER webinar, email her at justine@suchavoice.com

I hope you enjoy this powerful and inspirational episode just as much as I did! If you did, please leave a review for us! Check out this recent incredible review of The VoiceOver Pod:

"The Queen, Justine Reiss: This wonderful lady is truly the Queen of our business. Justine is sincere, honest, exhilarating, exuberant, polarizing, and energetic!! You inevitably get caught up in the moment with her enthusiasm and love for the craft! She and the guests on the podcast have a genuine vibe that you can feel and hear in their voices! As a person, I am honored to have Justine as a mentor and guide as a newcomer to the voice acting industry. She is truly one of the best in what she does!" - Dave Kaleel

Tune in to the full episode on Spotify, Apple, YouTube, or wherever you like to listen to your podcasts. Thank you for listening!

--

Check out our free PDF with pro-tips from real working voiceover actors here: bit.ly/3hT7ylz
Want to learn more about voiceover? Check out Justine's introductory webinar here: https://go.oncehub.com/YoureOnTheAirWithJustine
What does it take to be a good coach? What about one of the most successful coaches in Australian history? I asked former Australian netball coach Lisa Alexander about the coaching philosophies she implemented at the Netball Diamonds that led to her success. Lisa also talks about how being consistent, coaching the person first and the athlete second, and having a bit of an ego are all very important parts of being a successful coach.

Listen to the full interview episode at #142.

You can find Lisa at the TGI website: https://tgisport.com.au/talent/lisa-alexander/
Or the Celebrity Speakers website: https://www.celebrityspeakers.com.au/speakers/lisa-alexander/
Connect with Lisa on LinkedIn: https://www.linkedin.com/in/lisa-alexander-am-a83bb378/?originalSubdomain=au

Use code "PQPODCAST10" to get 10% off your Lumo Coffee order: https://lumocoffee.com/

Interested in sharing your story? Email Producer Shannon at support@performanceintelligence.com today with your story and contact details.

Learn more about Andrew and Performance Intelligence: https://performanceintelligence.com/
Find out more about Andrew's Keynotes: https://performanceintelligence.com/keynotes/
Follow Andrew May: https://www.instagram.com/andrewmay/

If you enjoy the podcast, we would really appreciate you leaving a short review on Apple Podcasts, Spotify or Google Play. It takes less than 60 seconds and really helps us build our audience and continue to provide high quality guests.
Who was this first Romansh woman to publish her own poems, under the pseudonym «Clio»? Why are her diaries written largely in Italian? And what moment in her biography turned her into a pioneer of the feminist movement in the Engadine?
Lisa Alexander is one of Australia's greatest sports coaches. From leading the Australian Diamonds to global netball impact, this isn't just about netball; it's a masterclass in coaching and leadership. Hear Lisa's journey, from her early coaching days to achieving an 81% win rate with the Diamonds and winning gold at the Commonwealth Games and World Cup.

In this episode Andrew and Lisa discuss:

00:20 Lisa is one of the most underrated coaches in Australia, having a chip on your shoulder, and how the best coaches usually weren't the best players.
6:40 Staying grounded and being open to learning, reaching out to Wayne Bennett and Craig Bellamy, and getting her learning outside of school and university.
10:45 The two books Lisa could potentially write, how Lisa was able to study, work and raise a child, and being achievement-oriented to try and impress her dad.
16:45 Why coaching and teaching work so well together, creating teams in the classroom, and spending a year following the Australian netball coach around.
21:45 Why the ability to be self-aware is the most important, Lisa's biggest influences, and Lisa's advice to people who want to get into coaching.
32:45 Craig Harper working for Lisa for 10c an hour, focusing on the person first and the athlete second, and why you have to have a bit of ego as a coach, but not too much.
37:25 The introduction of "Sisters in Arms" at the Diamonds, falling in love with rugby union at Twickenham, and respecting the culture of the country that you are coming in to.
44:30 Money can lead to mediocrity, encouraging players to do coaching courses while they are playing, and getting perspective outside your own environment.
49:00 Family first is a key tenet of Lisa's philosophy, Lisa's work with corporate teams, and where you can find Lisa.

You can find Lisa at the TGI website: https://tgisport.com.au/talent/lisa-alexander/
Or the Celebrity Speakers website: https://www.celebrityspeakers.com.au/speakers/lisa-alexander/
Connect with Lisa on LinkedIn: https://www.linkedin.com/in/lisa-alexander-am-a83bb378/?originalSubdomain=au

Use code "PIPODCAST10" to get 10% off your Lumo Coffee order: https://lumocoffee.com/

Learn more about Andrew and Performance Intelligence: https://performanceintelligence.com/
Find out more about Andrew's Keynotes: https://performanceintelligence.com/keynotes/
Follow Andrew May: https://www.instagram.com/andrewmay/

If you enjoy the podcast, we would really appreciate you leaving a short review on Apple Podcasts, Spotify or Google Play. It takes less than 60 seconds and really helps us build our audience and continue to provide high quality guests.
We discuss which players need to step up their respective games in 2025 or it will likely be their last season with the team. We end on a slew of movie trivia questions from the live stream. Enjoy!

Sign up and deposit for Underdog HERE with promo code TGI to get up to $1,000 in bonus cash and a free pick: underdogfantasy.com or download the app.

Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy
In this eye-opening episode of our Brand Series, restaurant industry experts Paul Barron and Paul Molinari dissect the current crisis in casual dining and reveal why some chains are thriving while others file for bankruptcy. Discover how Chili's achieved a remarkable 31% sales increase through strategic social media targeting and menu innovation, while TGI Fridays and Hooters struggle to connect with millennial consumers. The hosts analyze Red Lobster's repositioning strategy, debate the controversial Hooters rebranding plan, and explore how economic headwinds are creating recession indicators even for giants like McDonald's. Don't miss these critical insights on brand transformation, consumer behavior shifts, and the technological innovations poised to reshape restaurants in 2026-2027.

This episode is sponsored by Gusto → https://gusto.pxf.io/PBN
The #1 rated HR platform for payroll, benefits, and more. With Gusto's easy-to-use platform, you can empower your people and push your business forward. See why over 400,000 businesses choose Gusto.

#RestaurantBrandSeries #CasualDiningCrisis #FoodServiceFuture

Get Your Podcast Now! Are you a hospitality or restaurant industry leader looking to amplify your voice and establish yourself as a thought leader? Look no further than SavorFM, the premier podcast platform designed exclusively for hospitality visionaries like you. Take the next step in your industry leadership journey – visit https://www.savor.fm/

Capital & Advisory: Are you a fast-casual restaurant startup or a technology innovator in the food service industry? Don't miss out on the opportunity to tap into decades of expertise. Reach out to Savor Capital & Advisory now to explore how their seasoned professionals can propel your business forward. Discover if you're eligible to leverage our unparalleled knowledge in food service branding and technology and take your venture to new heights.

Don't wait – amplify your voice or supercharge your startup's growth today with Savor's ecosystem of industry-leading platforms and advisory services. Visit https://www.savor.fm/capital-advisory
Jerry takes you through the first three picks of the draft, which culminates in the Giants selecting perhaps the draft's most talented player in Penn State's Abdul Carter. Enjoy, folks.

Sign up and deposit for Underdog HERE with promo code TGI to get up to $1,000 in bonus cash and a free pick: underdogfantasy.com or download the app.

Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy
There's an exciting feeling around this franchise right now. We discuss the selection of Abdul Carter and the trade up for Jaxson Dart. We also look ahead to Day 2 and where the organization may go next.

Sign up and deposit for Underdog HERE with promo code TGI to get up to $1,000 in bonus cash and a free pick: underdogfantasy.com or download the app.

Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy
Anyone heading out on skis has to keep an eye on the avalanche danger. And to be able to assess it, the Institute for Snow and Avalanche Research (SLF) has to take all kinds of measurements, day in and day out. «Minisguard» accompanied Chasper Buchli of the SLF to the research site in the mountains above Tavau (Davos). And the cool part: it's not just about checking how much snow has fallen and whether it is loose or wet, but about a whole range of other things too.

The water cycle: There are around 1.4 sextillion liters of water on our planet, most of it in the form of salt water in the seas. All of this water is part of the water cycle. Thanks to the heat of the sun, it is constantly in motion, changing again and again from liquid to vapor and back. But what exactly happens?
In this episode of the Becker's Healthcare Podcast, Erica Carbajal speaks with Jennifer Baron, Chief Experience Officer at NRC Health, and Kathryn Peisert, Editor in Chief & Senior Director at TGI, about the critical role of trust in shaping the future of patient experience. Drawing from NRC Health's 2025 Experience Perspective report, they explore how health systems can strategically embed trust into their culture, improve engagement, and stay ahead of shifting expectations.

This episode is sponsored by NRC Health.
The Giants sent a fleet of folks to Colorado's Pro Day to watch Shedeur Sanders... or is it Travis Hunter? Or both? Enjoy, folks.

Sign up and deposit for Underdog HERE with promo code TGI to get up to $1,000 in bonus cash and a free pick: underdogfantasy.com or download the app.

Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy
Dave Syvertsen of Ourlads returns to talk all things Giants draft. You don't want to miss this one, folks. Enjoy.

Sign up and deposit for Underdog HERE with promo code TGI to get up to $1,000 in bonus cash and a free pick: underdogfantasy.com or download the app.

Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy
How long is Joe Schoen going to wait for Aaron Rodgers? Is it time to hand the keys to Russell Wilson or another QB (for now)? We discuss. Enjoy.

Sign up and deposit for Underdog HERE with promo code TGI to get up to $1,000 in Bonus Credits and a free pick: underdogfantasy.com or download the app.

Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy
We talk Giants draft with special guest Ric Serritella. Enjoy, folks.

Sign up and deposit for Underdog HERE with promo code TGI to get up to $1,000 in Bonus Credits and a free pick: underdogfantasy.com or download the app.

Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy
Jorge Andrés Henao, general manager of TGI, by Diario La República
We return to the live stream to discuss the Joe Schoen presser and our defensive free agent wish list, and we read your comments. Best of luck, @Bret_Gibson. Your wife is an AMAZING person! Enjoy.

Download the Underdog fantasy app and sign up with promo code TGI to get up to $1,000 in bonus cash and a free pick: underdogfantasy.com or download the app.

Must be 18+ (19+ AL, Nebraska; 19+ in CO for some games, 21+ MA & AZ) and present in a state where Underdog Fantasy operates. Terms apply. Void in CO. Concerned with your play? Call 1-800-GAMBLER or visit www.ncpgambling.org; AZ: 1-800-NEXT-STEP (1-800-639-8783) or text NEXT-STEP to 53342; NY: Call the 24/7 HOPEline at 1-877-8-HOPENY or Text HOPENY (467369).

Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy
On this live stream we review the free agency possibilities on the offensive side of the ball. At the end of the podcast, Jerry gives his review of "Becoming Led Zeppelin." Enjoy.

Download the Underdog fantasy app and sign up with promo code TGI to get up to $1,000 in bonus cash and a free pick: underdogfantasy.com or download the app.

Must be 18+ (19+ AL, Nebraska; 19+ in CO for some games, 21+ MA & AZ) and present in a state where Underdog Fantasy operates. Terms apply. Void in CO. Concerned with your play? Call 1-800-GAMBLER or visit www.ncpgambling.org; AZ: 1-800-NEXT-STEP (1-800-639-8783) or text NEXT-STEP to 53342; NY: Call the 24/7 HOPEline at 1-877-8-HOPENY or Text HOPENY (467369).

Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy
This nightmarish season finally ends with our archrivals winning the Super Bowl (naturally!) and a Giants legend waving their flag. Make it stop!!! Will Papa joins us to talk about his new venture -- fourthandgoal.net

Download the Underdog fantasy app and sign up with promo code TGI to get up to $1,000 in bonus cash and a free pick: underdogfantasy.com or download the app.

Must be 18+ (19+ AL, Nebraska; 19+ in CO for some games, 21+ MA & AZ) and present in a state where Underdog Fantasy operates. Terms apply. Void in CO. Concerned with your play? Call 1-800-GAMBLER or visit www.ncpgambling.org; AZ: 1-800-NEXT-STEP (1-800-639-8783) or text NEXT-STEP to 53342; NY: Call the 24/7 HOPEline at 1-877-8-HOPENY or Text HOPENY (467369).

Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy
Ourlads' Dave Syvertsen joins us to discuss the QBs at the top of the draft as well as some other first-round options for the GMEN. If the Chiefs beat the Eagles, it's 365 STRAIGHT DAYS OF TAYLOR SWIFT FOR JERRY. You don't want to miss this episode, folks. Enjoy.

Download the Underdog fantasy app and sign up with promo code TGI to get up to $1,000 in bonus cash and a free pick: underdogfantasy.com or download the app.

Must be 18+ (19+ AL, Nebraska; 19+ in CO for some games, 21+ MA & AZ) and present in a state where Underdog Fantasy operates. Terms apply. Void in CO. Concerned with your play? Call 1-800-GAMBLER or visit www.ncpgambling.org; AZ: 1-800-NEXT-STEP (1-800-639-8783) or text NEXT-STEP to 53342; NY: Call the 24/7 HOPEline at 1-877-8-HOPENY or Text HOPENY (467369).

Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy
The Giants hire a new position coach, we discuss some possibilities at No. 3, and we preview the upcoming Conference Championship games. Enjoy.

Download the Underdog fantasy app and sign up with promo code TGI to get up to $1,000 in bonus cash and a free pick: underdogfantasy.com or download the app.

Must be 18+ (19+ AL, Nebraska; 19+ in CO for some games, 21+ MA & AZ) and present in a state where Underdog Fantasy operates. Terms apply. Void in CO. Concerned with your play? Call 1-800-GAMBLER or visit www.ncpgambling.org; AZ: 1-800-NEXT-STEP (1-800-639-8783) or text NEXT-STEP to 53342; NY: Call the 24/7 HOPEline at 1-877-8-HOPENY or Text HOPENY (467369).

Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy
Jerome Henderson is out (hmm...), we review the Giants' unrestricted free agents and discuss who we should keep vs. who we should launch, and we make our picks for this playoff weekend. Enjoy.

Download the Underdog fantasy app and sign up with promo code TGI to get up to $1,000 in bonus cash and a free pick: underdogfantasy.com or download the app.

Must be 18+ (19+ AL, Nebraska; 19+ in CO for some games, 21+ MA & AZ) and present in a state where Underdog Fantasy operates. Terms apply. Void in CO. Concerned with your play? Call 1-800-GAMBLER or visit www.ncpgambling.org; AZ: 1-800-NEXT-STEP (1-800-639-8783) or text NEXT-STEP to 53342; NY: Call the 24/7 HOPEline at 1-877-8-HOPENY or Text HOPENY (467369).

Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy
The NY Post's Ryan Dunleavy joins the podcast to discuss the John Mara, Joe Schoen, and Brian Daboll pressers. We enjoy Ryan's perspectives and appreciate him lending us his opinions as well as his time.

Download the Underdog fantasy app and sign up with promo code TGI to get up to $1,000 in bonus cash and a free pick: underdogfantasy.com or download the app.

Must be 18+ (19+ AL, Nebraska; 19+ in CO for some games, 21+ MA & AZ) and present in a state where Underdog Fantasy operates. Terms apply. Void in CO. Concerned with your play? Call 1-800-GAMBLER or visit www.ncpgambling.org; AZ: 1-800-NEXT-STEP (1-800-639-8783) or text NEXT-STEP to 53342; NY: Call the 24/7 HOPEline at 1-877-8-HOPENY or Text HOPENY (467369).

Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy
The Giants lose to the Eagles and secure the third overall pick in the 2025 NFL Draft. We discuss the future of the Schoen/Daboll regime. This disaster of a season is finally over, folks.

Download the Underdog fantasy app and sign up with promo code TGI to get up to $1,000 in bonus cash and a free pick: underdogfantasy.com or download the app.

Must be 18+ (19+ AL, Nebraska; 19+ in CO for some games, 21+ MA & AZ) and present in a state where Underdog Fantasy operates. Terms apply. Void in CO. Concerned with your play? Call 1-800-GAMBLER or visit www.ncpgambling.org; AZ: 1-800-NEXT-STEP (1-800-639-8783) or text NEXT-STEP to 53342; NY: Call the 24/7 HOPEline at 1-877-8-HOPENY or Text HOPENY (467369).

Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy
Social media is on fire as the Giants win but drop from 1 to 4 in the 2025 draft order. We preview the matchup with Philly and discuss how long the Eagles will play their starters. We also preview the rest of Week 18. Hang in, folks.

Download the Underdog fantasy app and sign up with promo code TGI to get up to $1,000 in bonus cash and a free pick: underdogfantasy.com or download the app.

Must be 18+ (19+ AL, Nebraska; 19+ in CO for some games, 21+ MA & AZ) and present in a state where Underdog Fantasy operates. Terms apply. Void in CO. Concerned with your play? Call 1-800-GAMBLER or visit www.ncpgambling.org; AZ: 1-800-NEXT-STEP (1-800-639-8783) or text NEXT-STEP to 53342; NY: Call the 24/7 HOPEline at 1-877-8-HOPENY or Text HOPENY (467369).

Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy
Happy holidays! We'll be sharing snippets from Latent Space LIVE! through the break, bringing you the best of 2024! We want to express our deepest appreciation to event sponsors AWS, Daylight Computer, Thoth.ai, StrongCompute, Notable Capital, and most of all our LS supporters who helped fund the gorgeous venue and A/V production!

For NeurIPS last year we did our standard conference podcast coverage interviewing selected papers (which we have now also done for ICLR and ICML), but we felt we could be doing more to help AI Engineers 1) get more industry-relevant content, and 2) recap the 2024 year in review from experts. As a result, we organized the first Latent Space LIVE!, our first in-person miniconference, at NeurIPS 2024 in Vancouver. Today, we're proud to share Loubna's highly anticipated talk (slides here)!

Synthetic Data

We called out the Synthetic Data debate at last year's NeurIPS, and no surprise that 2024 was dominated by the rise of synthetic data everywhere:

* Apple's Rephrasing the Web, Microsoft's Phi 2-4 and Orca/AgentInstruct, Tencent's Billion Persona dataset, DCLM, and HuggingFace's FineWeb-Edu, plus Loubna's own Cosmopedia, extended the ideas of synthetic textbook and agent generation to improve raw web scrape dataset quality.
* This year we also talked to the IDEFICS/OBELICS team at HuggingFace, who released WebSight this year, the first work on code-vs-images synthetic data.
* We called Llama 3.1 the Synthetic Data Model for its extensive use (and documentation!) of synthetic data in its pipeline, as well as its permissive license.
* Nemotron CC and Nemotron-4-340B also made a big splash this year for how they used 20k items of human data to synthesize over 98% of the data used for SFT/PFT.
* Cohere introduced Multilingual Arbitrage: Optimizing Data Pools to Accelerate Multilingual Progress, observing gains of up to 56.5% improvement in win rates comparing multiple teachers vs the single best teacher model.
* In post-training, AI2's Tülu 3 (discussed by Luca in our Open Models talk) and Loubna's Smol Talk were also notable open releases this year.

This comes in the face of a lot of scrutiny and criticism, with Scale AI as one of the leading voices, publishing "AI models collapse when trained on recursively generated data" in Nature and bringing mainstream attention to the potential downsides of poor-quality syndata.

Part of the concerns we highlighted last year on low-background tokens are coming to bear: ChatGPT-contaminated data is spiking in every possible metric. But perhaps, if Sakana's AI Scientist pans out this year, we will have mostly-AI AI researchers publishing AI research anyway, so do we really care, as long as the ideas can be verified to be correct?

Smol Models

Meta surprised many folks this year by not just aggressively updating Llama 3 and adding multimodality, but also adding a new series of "small" 1B and 3B "on device" models, even working on quantized numerics collaborations with Qualcomm, Mediatek, and Arm. It is nearly unbelievable that a 1B model today can qualitatively match a 13B model of last year, and the minimum size needed to hit a given MMLU bar has come down roughly 10x in the last year.
We have been tracking this, proxied by LMSYS Elo and inference price. The key reads this year are:

* MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases
* Apple Intelligence Foundation Language Models
* Hymba: A Hybrid-head Architecture for Small Language Models
* Loubna's SmolLM and SmolLM2: a family of state-of-the-art small models with 135M, 360M, and 1.7B parameters on the Pareto efficiency frontier
* and Moondream, which we already covered in the 2024 in Vision talk

Full Talk on YouTube

Please like and subscribe!

Timestamps

* [00:00:05] Loubna Intro
* [00:00:33] The Rise of Synthetic Data Everywhere
* [00:02:57] Model Collapse
* [00:05:14] Phi, FineWeb, Cosmopedia - Synthetic Textbooks
* [00:12:36] DCLM, Nemotron-CC
* [00:13:28] Post Training - AI2 Tulu, Smol Talk, Cohere Multilingual Arbitrage
* [00:16:17] Smol Models
* [00:18:24] On Device Models
* [00:22:45] Smol Vision Models
* [00:25:14] What's Next

Transcript: 2024 in Synthetic Data and Smol Models

[00:00:05] Loubna Intro

[00:00:05] Speaker: I'm very happy to be here. Thank you for the invitation. So I'm going to be talking about synthetic data in 2024. And then I'm going to be talking about small on-device models. So I think the most interesting thing about synthetic data this year is that like now we have it everywhere in the large language models pipeline.

[00:00:33] The Rise of Synthetic Data Everywhere

[00:00:33] Speaker: I think initially, synthetic data was mainly used just for post-training, because naturally that's the part where we needed human annotators. And then after that, we realized that we don't really have good benchmarks to measure if models follow instructions well, if they are creative enough, or if they are chatty enough, so we also started using LLMs as judges.

[00:01:08] Speaker: And I think this year and towards the end of last year, we also went to the pre-training part, and we started generating synthetic data for pre-training to kind of replace some parts of the web. And the motivation behind that is that you have a lot of control over synthetic data. You can control your prompt and basically also the kind of data that you generate. So instead of just trying to filter the web, you could try to get the LLM to generate what you think the best web pages could look like and then train your models on that. So this is how we went from not having synthetic data at all in the LLM pipeline to having it everywhere.

[00:01:49] Speaker: And so the cool thing is that today you can train an LLM with an entirely synthetic pipeline. For example, you can use our Cosmopedia datasets and you can train a 1B model on like 150 billion tokens that are 100 percent synthetic. And those are also of good quality. And then you can instruction-tune the model on a synthetic SFT dataset. You can also do DPO on a synthetic dataset. And then to evaluate if the model is good, you can use a benchmark that uses LLMs as a judge, for example, MT-Bench or AlpacaEval. So I think this is really mind-blowing, because just a few years ago we wouldn't have thought this was possible. And I think there are a lot of concerns about model collapse, and I'm going to talk about that later. But we'll see that if we use synthetic data properly and we curate it carefully, that shouldn't happen.

[00:02:29] Speaker: And the reason synthetic data is very popular right now is that we have really strong models, both open and closed.
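To make the LLM-as-a-judge step concrete, here is a minimal sketch of the idea in Python. Nothing below comes from the talk itself: the endpoint URL, model name, and prompt are placeholders, and any OpenAI-compatible server (for example one run with vLLM or TGI) exposes this same chat-completions call.

```python
# Minimal LLM-as-a-judge sketch: score a candidate answer with a stronger model.
# base_url, api_key, and the model name are hypothetical placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

JUDGE_PROMPT = (
    "Rate the assistant's answer from 1 to 10 for helpfulness and correctness.\n"
    "Question: {question}\nAnswer: {answer}\n"
    "Reply with only the integer score."
)

def judge(question: str, answer: str, judge_model: str = "judge-model") -> int:
    resp = client.chat.completions.create(
        model=judge_model,
        messages=[{"role": "user", "content": JUDGE_PROMPT.format(question=question, answer=answer)}],
        temperature=0.0,
    )
    # Sketch only: a real harness parses the reply more defensively than int().
    return int(resp.choices[0].message.content.strip())

print(judge("What is 2 + 2?", "4"))
```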
Synthetic data is really cheap and fast to produce compared to human annotations, which cost a lot and take a lot of time. And also for open models right now, we have some really good inference frameworks. So if you have enough GPUs, it's really easy to spin them up and generate a lot of synthetic data. Some examples are vLLM, TGI, and TensorRT-LLM.

[00:02:57] Model Collapse

[00:02:57] Speaker: Now let's talk about the elephant in the room, model collapse. Is this the end? If you look at the media and, for example, some papers in Nature, it's really scary, because there's a lot of synthetic data out there on the web. And naturally we train on the web, so we're going to be training on a lot of synthetic data. And if model collapse is going to happen, we should really take that seriously. And the other issue is that, as I said, a lot of people think the web is polluted because there's a lot of synthetic data. For example, when we were building the FineWeb dataset here with Guilherme and Hynek, we were interested in how much synthetic data there is in the web. There isn't really a method to properly measure the amount of synthetic data, or to say whether a webpage is synthetic or not. But one thing we can do is look for proxy words, for example expressions like "as a large language model" or words like "delve" that we know are actually generated by ChatGPT. We can measure the amount of these words in our datasets and compare them to previous years. For example, here, we measured the ratio of these words in different dumps of Common Crawl, and we can see that the ratio really increased after ChatGPT's release. So if we were to say that the amount of synthetic data didn't change, you would expect this ratio to stay constant, which is not the case.

[00:04:11] Speaker: So there's probably a lot of synthetic data on the web, but does this really make models worse? What we did is we trained different models on these different dumps, computed their performance on popular NLP benchmarks, and then computed the aggregated score. And surprisingly, you can see that the latest dumps are actually even better than the dumps that came before. So if there's some synthetic data there, at least it did not make the models worse. Yeah, which is really encouraging. So personally, I wouldn't say the web is poisoned with synthetic data. Maybe it's even making it richer. And the issue with the model collapse studies is that, for example, they were done at a small scale: you would ask the model to complete, for example, a Wikipedia paragraph, then train it on these new generations, and do that iteratively. I think if you do that, it's normal to observe this kind of behavior, because the quality is going to be worse: the model is already small, and if you train it just on its own generations, you shouldn't expect it to become better. But what we're really doing here is taking a model that is very large and trying to distill its knowledge into a model that is smaller.

[00:05:14] Phi, FineWeb, Cosmopedia - Synthetic Textbooks

[00:05:14] Speaker: And in this way, you can expect to get better performance for your small model. And using synthetic data for pre-training has become really popular.
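The proxy-word measurement described above is easy to sketch in a few lines. The marker list and file names below are illustrative placeholders, not the actual FineWeb tooling:

```python
# Estimate ChatGPT-flavored contamination in corpus dumps by counting
# marker phrases that are over-represented in LLM output.
import re
from pathlib import Path

MARKERS = ["as a large language model", "delve"]  # illustrative marker list
PATTERN = re.compile("|".join(re.escape(m) for m in MARKERS), re.IGNORECASE)

def marker_ratio(path: Path) -> float:
    """Fraction of documents (one per line) that contain at least one marker."""
    docs = path.read_text(encoding="utf-8").splitlines()
    hits = sum(1 for doc in docs if PATTERN.search(doc))
    return hits / max(len(docs), 1)

# Compare samples of dumps from before and after ChatGPT's release (hypothetical files):
for dump in ["cc_2021_sample.txt", "cc_2023_sample.txt"]:
    print(dump, marker_ratio(Path(dump)))
```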
This started with the "Textbooks Are All You Need" papers, where Microsoft basically trained a series of small models on textbooks that were generated using a large LLM, and found that these models were actually better than models that are much larger. So this was really interesting. It was the first of its kind, but it was also met with a lot of skepticism, which is a good thing in research. It pushes you to question things, because the dataset that they trained on was not public, so people were not really sure if these models are really good or maybe there's just some data contamination. It was really hard to check if you just have the weights of the models.

[00:06:00] Speaker: And at Hugging Face, because we like open source, we tried to reproduce what they did. So this is our Cosmopedia dataset. We basically tried to follow a similar approach to what they documented in the paper, and we created a synthetic dataset of textbooks and blog posts and stories that had almost 30 billion tokens. And we tried to train some models on that. We found that the key ingredient to getting a good synthetic dataset is trying as much as possible to keep it diverse. Because if you just throw the same prompt at your model, like "generate a textbook about linear algebra," even if you change the temperature, the textbooks are going to look alike. So there's no way you could scale to millions of samples. The way you do that is by creating prompts that have some seeds that make them diverse. In our case, in the prompt we would ask the model to generate a textbook, but make it related to an extract from a webpage. And we also try to frame it to stay within topic. For example, here, we put an extract about cardiovascular bioimaging, and then we ask the model to generate a textbook related to medicine that is also related to this webpage. And this is a really nice approach because there are so many webpages out there, so you can be sure that your generation is going to be diverse when you change the seed example.

[00:07:16] Speaker: One thing that's challenging with this is that you want the seed samples to be related to your topics. So we use a search tool to go through all of the FineWeb dataset. And we also do a lot of experiments with the type of generations we want the model to produce. For example, we ask it for textbooks for middle school students or textbooks for college. And we found that some generation styles help on some specific benchmarks, while others help on other benchmarks. For example, college textbooks are really good for MMLU, while middle school textbooks are good for benchmarks like OpenBookQA and PIQA. This is a sample from our search tool: you have a top category, which is a topic, then you have some subtopics, and then you have the topic hits, which are basically the web pages in FineWeb that belong to these topics. And here you can see the comparison between Cosmopedia (we had two versions, V1 and V2, in blue and red) and FineWeb, and as you can see, throughout the training, training on Cosmopedia was consistently better.

[00:08:20] Speaker: So we managed to get a dataset that was actually good to train these models on.
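A minimal sketch of that seeding trick: each prompt embeds a different web extract and a target audience, so generations stay diverse even at millions of samples. The prompt template, topic, and extract below are illustrative, not the actual Cosmopedia prompts:

```python
# Cosmopedia-style seeded prompting: diversity comes from the seed extract
# and the audience/style, not from sampling temperature.
from itertools import product

TEMPLATE = (
    "Write a {style} textbook section related to {topic}. "
    "The section must be connected to the following web extract:\n\n{extract}"
)

def build_prompts(extracts: list[str], topic: str) -> list[str]:
    styles = ["middle school", "college"]  # different audiences help different benchmarks
    return [
        TEMPLATE.format(style=style, topic=topic, extract=extract)
        for extract, style in product(extracts, styles)
    ]

prompts = build_prompts(
    ["Cardiovascular bioimaging combines MRI and ultrasound to ..."],  # placeholder seed
    topic="medicine",
)
print(len(prompts), prompts[0][:80])
```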
Cosmopedia is of course much smaller than FineWeb, only 30 billion tokens, but that's the scale Microsoft's datasets were at, so we kind of managed to reproduce a bit of what they did. And the dataset is public, so everyone can go there and check that everything is all right.

[00:08:38] Speaker: And now this is a recent paper from NVIDIA, Nemotron-CC. They took things a bit further, and they generated not a few billion tokens, but 1.9 trillion tokens, which is huge. And we can see later how they did that. It's more of, like, rephrasing the web. So we can see today that there are some really huge synthetic datasets out there, and they're public, so you can try to filter them even further if you want to get more high-quality corpora.

[00:09:04] Speaker: This rephrasing-the-web approach was suggested in a paper by Pratyush, where basically they take some samples from the C4 dataset and then use an LLM to rewrite these samples into a better format. For example, they ask an LLM to rewrite the sample into a Wikipedia passage or into a Q&A page. And the interesting thing about this approach is that you can use a model that is small, because rewriting doesn't require knowledge. It's just rewriting a page into a different style, so the model doesn't need extensive knowledge, compared to asking a model to generate a whole new textbook without giving it ground truth. So here they rewrite some samples from C4 into Q&A and into Wikipedia style, and they find that doing this works better than training just on C4.

[00:09:45] Speaker: And what they did in Nemotron-CC is a similar approach. They rewrite some pages from Common Crawl for two reasons. One is to improve pages that are low quality, so they rewrite them into, for example, a Wikipedia-style page, so they look better. And another reason is to create more diverse datasets. So they have a dataset that they already heavily filtered, and then they take these pages that are already high quality and ask the model to rewrite them in question-and-answer format, into open-ended questions or multiple-choice questions. This way they can reuse the same page multiple times without fearing multiple duplicates, because it's the same information, but it's going to be written differently. So I think that's also a really interesting approach for generating synthetic data just by rephrasing the pages that you already have.

[00:10:44] Speaker: There's also this approach called ProX, where they start from a web page and then generate a program which finds how to rewrite that page to make it better and less noisy. For example, here you can see that there's some leftover metadata in the web page, and you don't necessarily want to keep that for training your model. So they train a model that can generate programs that normalize the page and remove lines that are extraneous. So I think this approach is also interesting, but it's maybe less scalable than the approaches that I presented before. So that was it for rephrasing and generating new textbooks.

[00:11:17] Speaker: Another approach that I think is really good and becoming really popular for using synthetic data for pre-training is basically building better classifiers for filtering the web. For example, here we released the dataset called FineWeb-Edu.
And the way we built it is by taking Llama 3 and asking it to rate the educational content of web pages from zero to five. So for example, if a page is a really good textbook that could be useful in a school setting, it would get a really high score, and if a page is just an advertisement or promotional material, it would get a lower score. And then after that, we take these synthetic annotations and train a classifier on them, a classifier like a BERT model. Then we run this classifier on all of FineWeb, which is a 15-trillion-token dataset, and we only keep the pages that have a score higher than 3. So for example, in our case, we went from 15 trillion tokens to just 1.5 trillion tokens that are really highly educational. And as you can see here, FineWeb-Edu outperforms all the other public web datasets by a large margin on a couple of benchmarks. Here I show the aggregated score, and you can see that this approach is really effective for filtering web datasets to get better corpora for training your LLMs.

[00:12:36] DCLM, Nemotron-CC

[00:12:36] Speaker: Others also tried this approach. There's, for example, the DCLM dataset, where they also trained a classifier, but not to detect educational content. Instead, they trained it on the OpenHermes dataset, which is a dataset for instruction tuning, and also on the r/ExplainLikeImFive subreddit, and they also got a really high-quality dataset which is very information-dense and can help you train some really good LLMs.

[00:13:01] Speaker: And then for Nemotron Common Crawl, they also did this approach, but instead of using one classifier, they used an ensemble of classifiers. So they used, for example, the DCLM classifier, and also classifiers like the ones we used in FineWeb-Edu, and then they combined the scores with an ensemble method to only retain the best high-quality pages, and they got a dataset that works even better than the ones we developed.

[00:13:25] Speaker: So that was it for synthetic data for pre-training.

[00:13:28] Post Training - AI2 Tulu, Smol Talk, Cohere Multilingual Arbitrage

[00:13:28] Speaker: Now we can go back to post-training. I think there are a lot of interesting post-training datasets out there. One that was released recently is AgentInstruct by Microsoft, where they basically try to target some specific skills and improve the performance of models on them. For example, here, you can see code, brain teasers, open-domain QA, and they managed to get a dataset such that, when fine-tuning Mistral 7B on it, it outperforms the original instruct model that was released by Mistral. And as I said, to get good synthetic data, you really have to have a framework to make sure that your data is diverse.
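Backing up to the FineWeb-Edu recipe just described (an LLM scores a sample of pages from 0 to 5, a small classifier is trained on those labels, and everything below 3 is dropped), the filtering stage might look like the sketch below. The checkpoint name is hypothetical, and it assumes the classifier was trained with the digit scores as its label names:

```python
# FineWeb-Edu-style filtering sketch: run a small trained classifier over pages
# and keep only those rated >= 3 for educational value.
from transformers import pipeline

classifier = pipeline("text-classification", model="my-edu-classifier")  # hypothetical checkpoint

def keep(page: str, threshold: int = 3) -> bool:
    result = classifier(page[:2000], truncation=True)[0]
    return int(result["label"]) >= threshold  # assumes labels are "0".."5"

pages = [
    "Photosynthesis converts light energy into chemical energy ...",
    "BUY CHEAP WATCHES NOW!!! LIMITED OFFER ...",
]
educational = [p for p in pages if keep(p)]
```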
And for example, in the tool mixture to generate like a new code snippet, they would give like the model persona, for example, a machine learning researcher interested in neural networks, and then ask it to generate like a coding problem.[00:14:49] Speaker: This way you make sure that your data set is really diverse, and then you can further filter the data sets, for example, using the reward models. We also released a dataset called Smalltalk, [00:15:00] and we also tried to cover the wide range of tasks, and as you can see here, for example, when fine tuning Mistral 7b on the dataset, we also outperformed the original Mistral instructs on a number of benchmarks, notably on mathematics and instruction following with ifevil.[00:15:18] Speaker: Another paper that's really interesting I wanted to mention is this one called Multilingual Data Arbitrage by Cohere. And basically they want to generate a data set for post training that is multilingual. And they have a really interesting problem. It's the fact that there isn't like one model that's really good at all the languages they wanted.[00:15:36] Speaker: So what they do is that like they use not just one teacher model, but multiple teachers. And then they have a router which basically sends the prompts they have to all these models. And then they get the completions and they have a reward model that traces all these generations and only keeps the best one.[00:15:52] Speaker: And this is like arbitrage and finance. So well, I think what's interesting in this, it shows that like synthetic data, it doesn't have to come from a single model. [00:16:00] And because we have so many good models now, you could like pull these models together and get like a dataset that's really high quality and that's diverse and that's covers all your needs.[00:16:12] Speaker: I was supposed to put a meme there, but. Yeah, so that was it for like a synthetic data.[00:16:17] Smol Models[00:16:17] Speaker: Now we can go to see what's happening in the small models field in 2024. I don't know if you know, but like now we have some really good small models. For example, Lama 3. 2 1B is. It matches Lama 2. 13b from, that was released last year on the LMSYS arena, which is basically the default go to leaderboard for evaluating models using human evaluation.[00:16:39] Speaker: And as you can see here, the scores of the models are really close. So I think we've made like hugely forward in terms of small models. Of course, that's one, just one data point, but there's more. For example, if you look at this chart from the Quint 2. 5 blog post, it shows that today we have some really good models that are only like 3 billion parameters [00:17:00] and 4 billion that score really high on MMLU.[00:17:03] Speaker: Which is a really popular benchmark for evaluating models. And you can see here that the red, the blue dots have more than 65 on MMLU. And the grey ones have less. And for example, Llama33b had less. So now we have a 3b model that outperforms a 33b model that was released earlier. So I think now people are starting to realize that like, we shouldn't just scale and scale models, but we should try to make them more efficient.[00:17:33] Speaker: I don't know if you knew, but you can also chat with a 3B plus model on your iPhone. For example, here, this is an app called PocketPal, where you can go and select a model from Hugging Face. It has a large choice. For example, here we loaded the 5. 3. 5, which is 3. 8 billion parameters on this iPhone. 
And we can chat with this and you can see that even the latency is also acceptable.[00:17:57] Speaker: For example, here, I asked it to give me a joke about [00:18:00] NeurIPS. So let's see what it has to say.[00:18:06] Speaker: Okay, why did the neural network attend NeurIPS? Because it heard there would be a lot of layers and fun and it wanted to train its sense of humor. So not very funny, but at least it can run on device. Yeah, so I think now we have good small models, but we also have like good frameworks and tools to use these small models.[00:18:24] On Device Models[00:18:24] Speaker: So I think we're really close to having like really on edge and on device models that are really good. And I think for a while we've had this narrative. But just training larger models is better. Of course, this is supported by science scaling laws. As you can see here, for example, when we scale the model size, the loss is lower and obviously you get a better model.[00:18:46] Speaker: But and we can see this, for example, in the GPT family of models, how we went from just a hundred million parameters to more than a trillion. parameters. And of course, we all observed the performance improvement when using the latest model. But [00:19:00] one thing that we shouldn't forget is that when we scale the model, we also scale the inference costs and time.[00:19:05] Speaker: And so the largest models were are going to cost so much more. So I think now instead of just building larger models, we should be focusing on building more efficient models. It's no longer a race for the largest models since these models are really expensive to run and they require like a really good infrastructure to do that and they cannot run on, for example, consumer hardware.[00:19:27] Speaker: And when you try to build more efficient models that match larger models, that's when you can really unlock some really interesting on device use cases. And I think a trend that we're noticing now is the trend of training smaller models longer. For example, if you compare how much, how long LLAMA was trained compared to LLAMA3, there is a huge increase in the pre training length.[00:19:50] Speaker: LLAMA was trained on 1 trillion tokens, but LLAMA3 8b was trained on 15 trillion tokens. So Meta managed to get a model that's the same size, but But it performs so much [00:20:00] better by choosing to like spend the sacrifice during training, because as we know, training is a one time cost, but inference is something that's ongoing.[00:20:08] Speaker: If we want to see what are like the small models reads in 2024, I think this mobile LLM paper by Meta is interesting. They try to study different models that are like have the less than 1 billion parameters and find which architecture makes most sense for these models. For example, they find that depth is more important than width.[00:20:29] Speaker: So it's more important to have models that have like more layers than just one. making them more wide. They also find that GQA helps, that tying the embedding helps. So I think it's a nice study overall for models that are just a few hundred million parameters. There's also the Apple intelligence tech report, which is interesting.[00:20:48] Speaker: So for Apple intelligence, they had two models, one that was like on server and another model that was on device. It had 3 billion parameters. And I think the interesting part is that they trained this model using [00:21:00] pruning. And then distillation. 
For example, Apple has a table showing that using pruning and distillation works much better than training from scratch. They also have some interesting insights about how they specialize their models on specific tasks, like, for example, summarization and rewriting.

[00:21:23] Speaker: There's also this paper by NVIDIA that was released recently. I think you've already had a talk about hybrid models that was really interesting. In this model, they used a hybrid architecture between state space models and transformers, and they managed to train a 1B model that's really performant without needing to train it on a lot of tokens.

[00:21:46] Speaker: And regarding our work, we just recently released SmolLM2. It's a series of three models, which are the best in class in each model size. For example, our 1.7B model outperforms Llama 1B and also Qwen2.5. How we managed to train this model is the following: we spent a lot of time trying to curate the pre-training datasets. We did a lot of ablations, trying to find which datasets are good and also how to mix them. We also created some new math and code datasets that we're releasing soon. We basically really spent a lot of time trying to find the best mixture that you can train these models on. And then we also trained these models for very long. For example, SmolLM1 was trained on only 1 trillion tokens, but this model is trained on 11 trillion tokens. And we saw that the performance kept improving. The models didn't really plateau mid-training, which I think is really interesting: it shows that you can train such small models for very long and keep getting performance gains. What's interesting about SmolLM2 is that it's fully open. We also released the pre-training code base, the fine-tuning code, the datasets, and also the evaluation in this repository.

[00:22:45] Smol Vision Models

[00:22:45] Speaker: There are also really interesting small models not just for text, but also for vision. For example, here you can see SmolVLM, which is a 2B model that's really efficient. It doesn't consume a lot of RAM, and it also has good performance. There's also Moondream 0.5B, which was released recently. It's the smallest visual language model, and as you can see, there isn't a big trade-off compared to Moondream 2B. So now I've shown you that we have some really good small models, and we also have the tools to use them. But why should you consider using small models, and when? I think small models are really interesting because of the on-device angle. Because these models are small and they can run fast, you can basically run them on your laptop, but also on your mobile phone. And this means that your data stays local; you don't have to send your queries to third parties, and this really enhances privacy. That was, for example, one of the big selling points for Apple Intelligence. Also, right now we have so many frameworks for doing on-device inference: for example, there's MLX, MLC, llama.cpp, Transformers.js. Each of them has great features, so you have a lot of options for doing that.
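For a feel of how little code local inference takes, here is a sketch with the Transformers pipeline. The SmolLM2 checkpoint name is an assumption (swap in whatever small model you use), and it relies on recent Transformers versions accepting chat messages directly:

```python
# Run a small instruct model locally; the checkpoint name is assumed.
from transformers import pipeline

chat = pipeline("text-generation", model="HuggingFaceTB/SmolLM2-1.7B-Instruct")

messages = [{"role": "user", "content": "Tell me a joke about NeurIPS."}]
out = chat(messages, max_new_tokens=64)
# With chat input, generated_text holds the whole conversation; the last
# message is the assistant's reply.
print(out[0]["generated_text"][-1]["content"])
```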
Small models are also really powerful if you choose to specialize them.[00:24:00][00:24:00] Speaker: For example, here there's a startup called NuMind, which took SmolLM and fine-tuned it on text-extraction datasets. And they managed to get a model that's not very far from models that are much larger. So I think text extraction is one use case where small models can be really performant, and it makes sense to use them instead of much larger models.[00:24:19] Speaker: You can also chat with these models in the browser. For example, here you can go to this page, load the model, even turn off your internet, and just start chatting with the model locally. Speaking of text extraction, if you don't want to fine-tune the models, there's a really good method called structured generation (see the sketch after this segment).[00:24:36] Speaker: You can basically force the model to follow a JSON schema that you define. For example, here we try to force the model to follow a schema for extracting key information from GitHub issues. So you can input free text, which is a complaint about a GitHub repository, something not working, and then you can run it and the model extracts anything that is relevant for your GitHub issue creation.[00:24:58] Speaker: For example, the [00:25:00] priority, here the priority is high, the type of the issue (a bug), a title, and an estimate of how long this will take to fix. And you can do this right in the browser: you can transform your text into a properly formatted GitHub issue.[00:25:14] What's Next[00:25:14] Speaker: So what's next for synthetic data and small models?[00:25:18] Speaker: I think domain-specific synthetic data is already important and is going to become even more important. For example, generating synthetic data for math: I think this would really help improve the reasoning of a lot of models, and a lot of people are doing it, for example Qwen2.5 Math, and everyone's trying to reproduce o1.[00:25:37] Speaker: So I think for synthetic data, specializing it on some domains is going to be really important. And then for small models, I think specializing them through fine-tuning is also going to be really important, because a lot of companies are just using large models because they are better.[00:25:53] Speaker: But on some tasks you can already get decent performance with small models, so you don't need to [00:26:00] pay a much larger cost just to make your model better at your task by a few percent. And this is not just for text; I think it also applies to other modalities like vision and audio.[00:26:11] Speaker: And I think you should also watch out for on-device frameworks and applications. For example, the app I showed, or Ollama: all these frameworks are becoming really popular, and I'm pretty sure we're going to get more of them in 2025. And users really like that. Before wrapping up, I should also share a hot take.[00:26:28] Speaker: I think that in AI we started with fine-tuning, for example trying to make BERT work on specific use cases and really struggling to do that. Then we got models that are much larger, so we just switched to prompt engineering to get the models to do what we want. And I think we're now going back to fine-tuning, because we realize these large models are really costly.[00:26:47] Speaker: It's better to use just a small model or to specialize it.
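As a sketch of the schema-constrained generation described above, here is what the GitHub-issue extraction could look like with the outlines library, which restricts decoding so the output always parses against a Pydantic schema. The field names, model ID, and the pre-1.0 outlines API shown here are assumptions for illustration, not the demo's actual code.

from enum import Enum
from pydantic import BaseModel
import outlines

class Priority(str, Enum):
    low = "low"
    medium = "medium"
    high = "high"

class GitHubIssue(BaseModel):
    title: str
    priority: Priority       # decoding is constrained to one of the enum values
    issue_type: str
    estimated_days: int

# Assumed model ID; any small instruct model works in principle.
model = outlines.models.transformers("HuggingFaceTB/SmolLM2-1.7B-Instruct")
generator = outlines.generate.json(model, GitHubIssue)

report = "The login button crashes the app every time on Android. Please fix ASAP."
issue = generator(f"Extract a structured GitHub issue from this report: {report}")
print(issue)  # a GitHubIssue instance with schema-valid fields

Because the sampler can only emit tokens consistent with the schema, the result never needs post-hoc JSON repair, which is what makes this attractive for small models.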
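And since the closing argument is that specializing a small model often beats paying for a much larger one, here is a minimal LoRA fine-tuning sketch using the peft and trl libraries. The dataset path, hyperparameters, and model ID are placeholders, and trl's exact argument names vary across versions, so treat this as a sketch under those assumptions.

from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Placeholder dataset; swap in your own task-specific examples.
dataset = load_dataset("json", data_files="my_task_data.json", split="train")

# LoRA trains small adapter matrices instead of all model weights,
# so specialization is cheap even on modest hardware.
peft_config = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"])

trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-1.7B-Instruct",  # assumed model ID
    train_dataset=dataset,
    peft_config=peft_config,
    args=SFTConfig(output_dir="smollm2-specialized", max_seq_length=2048),
)
trainer.train()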
So I think it's a bit of a cycle, and we're going to start seeing more fine-tuning and less pure prompt engineering. So that was my talk. Thank you for following, and if you have [00:27:00] any questions, we can take them now. Get full access to Latent Space at www.latent.space/subscribe
We start off this podcast with Jerry's chance encounter with Bill Belichick on 6th Ave, we review the loss to the Saints, preview the Ravens matchup, and make our picks for Week 15. Hang in, folks.
Béatrice Brugère is our guest on the ISP podcasts. Béatrice Brugère, as everyone knows, you are a magistrate. You served as deputy public prosecutor at the Court of Appeal of Douai, as a judge at the TGI of Paris handling JIRS litigation, and as deputy chief prosecutor at the TGI of Versailles, and you are now first deputy chief prosecutor at the TJ of Paris. You were recently re-elected secretary general of the Unité-Magistrats union. Your voice carries weight and is regularly heard in the major media. In 2024 you wrote a widely acclaimed book, « Justice : la colère qui monte. Plaidoyer pour une refondation », which received the political book prize of the Paris Bar and the Edgar Faure prize. Béatrice Brugère, reading your book, it is obvious that you have a vision for the justice system. In this podcast, you will explain that vision to us: what is your idea of the necessary refoundation of justice?
The Giants lost yet another game to the Cowboys on Thanksgiving Day. We discuss that loss, preview the Saints game, and make our picks for Week 14. Hang in there, folks.
Another week, another loss, Giants fans -- this time in embarrassing fashion at home to the Tampa Bay Buccaneers. Enjoy this therapy session where we pretty much cover everything regarding this franchise, preview the Cowboys matchup, and make our picks for Week 13. Hang in.
We hand out our midseason grades for MVP and most disappointing, discuss the move from Danny Dimes to Tommy Cutlets, and make our picks for Week 12.
The Giants lose to the Panthers in Germany in absolutely horrific fashion. Just brutal honesty in this podcast, folks. Hang in.
There's a lot to talk about in this one, folks. No moves at the trade deadline, we review the Commanders game, preview the Panthers game, and make our picks for Week 10. Hang in.
Welcome to ohmTown. The Non Sequitur News Show is held live via Twitch and YouTube every day. We, Mayor Watt and the AI that runs ohmTown, cover a selection of aggregated news articles and discuss them briefly with a perspective merging business, technology, and society. You can visit https://www.youtube.com/ohmtown for the complete history since 2022.

Articles Discussed:
No, You're a Disputant: https://www.ohmtown.com/groups/roundersgear/f/d/arkansas-lottery-scratch-off-disputants-ordered-to-split-500k-prize/
Steam's Built-in Recording: https://www.ohmtown.com/groups/nonsequiturnews/f/d/steams-built-in-game-recording-is-now-available-to-all/
TGI isn't being spent: https://www.ohmtown.com/groups/mobble/f/d/bankrupt-tgi-fridays-has-50-million-in-unused-gift-cards/
Oh, that Hertz: https://www.ohmtown.com/groups/four-wheel-tech/f/d/hertz-apologizes-for-threatening-to-have-customer-who-drove-25000-miles-in-rental-car-arrested/
Just a Raccoon trying to catch a flight: https://www.ohmtown.com/groups/four-wheel-tech/f/d/raccoon-jumps-the-check-in-line-at-laguardia/
Citizen Chemists, What Could Go Wrong: https://www.ohmtown.com/groups/greenagram/f/d/citizen-scientists-can-be-chemists-give-them-a-chance/
Trucker Shortage in Japan: https://www.ohmtown.com/groups/nonsequiturnews/f/d/japans-intense-trucker-shortage-may-inspire-a-drastic-solution-a-giant-conveyer-belt-between-cities/
Nintendo says Switch 2 will be compatible: https://www.ohmtown.com/groups/nonsequiturnews/f/d/nintendo-says-its-switch-successor-will-be-backward-compatible-with-switch-games/
ER Reboot is totally different: https://www.ohmtown.com/groups/the-continuity-report/f/d/warner-bros-fires-back-at-crichton-estate-over-claim-the-pitt-is-an-er-reboot-its-a-completely-different-show/
Jarvis AI can take over computers: https://www.ohmtown.com/groups/technologytoday/f/d/google-accidentally-leaked-a-preview-of-its-jarvis-ai-that-can-take-over-computers/
We recap the disaster against Philly and preview the Monday Night matchup with the Steelers. We also take your questions and make our picks for Week 8.
We preview the pivotal matchup against the Eagles and make our picks for Week 7. Chris discusses a couple of conversations from the locker room while Jerry complains about stadium and airplane etiquette.
The Giants had a chance to keep pace in the NFC East but once again lost a heartbreaker in prime time. The defense did their part, but the offense couldn't move the ball. Yes, we focus on the quarterback. Shout out to Big Blue BBQ for their incredible tailgate.
It's our first victory podcast of the year, folks. The Giants upset the Cleveland Browns by a score of 21-15. We recap the good (it's mostly good) as well as the bad. Enjoy!
We review the heartbreaker against Washington, preview the matchup with Cleveland, and make our picks for Week 3. Oh, and yes, some commentary on the NFC East. Enjoy!
The incarceration of transgender, gender nonconforming, and intersex (TGI) people often leads to heightened discrimination and violence within prison walls. Miss Major Alexander L. Lee TGIJP Black Trans Cultural Center is standing up for these individuals, offering legal and emotional support while pushing for systemic change. Discover how they are working to defend dignity and transform the future for marginalized communities. Want to support Miss Major Alexander L. Lee TGIJP Black Trans Cultural Center? https://tgijp.org/ Find this episode at: https://great.com/great-talks-with/miss-major-alexander-l-lee-tgijp-black-trans-cultural-center/
The conflict between Arab states and Israel that shaped a large part of the political problems of the Middle East. It lasted six days, but it felt like at least six years! Set aside thirty minutes of your day and learn from professor Vítor Soares (@profvitorsoares) about the Six-Day War. - If you want access to exclusive episodes and want to help História em Meia Hora stay on its feet, click the link: www.apoia.se/historiaemmeiahora Buy the book "História em Meia Hora - Grandes Civilizações"! https://www.loja.literatour.com.br/produto/pre-venda-livro-historia-em-meia-hora-grandes-civilizacoesversao-capa-dura/ Buy my first history-of-Brazil gamebook, "O Porão": https://amzn.to/4a4HCO8 Buy our shirts, hoodies, and many more history-themed items at Lolja! www.lolja.com.br/creators/historia-em-meia-hora/ PIX and contact: historiaemmeiahora@gmail.com Host: Prof. Vítor Soares. Script: Prof. Vítor Soares and Prof. Victor Alexandre (@profvictoralexandre)
REFERENCES USED:
- ARMSTRONG, Karen. Jerusalém: uma cidade, três religiões. São Paulo: Companhia das Letras, 2000.
- CAMARGO, Cláudio. Guerras Árabe-israelenses. In: MAGNOLI, Demétrio (org.). História das Guerras. São Paulo: Contexto, 2013.
- NPR. Timeline: The Six Day War. Available at: https://www.npr.org/templates/story/story.php?storyId=10694216.
- BBC Brasil. Os seis dias que já duram 50 anos: a guerra que mudou para sempre o Oriente Médio. Available at: https://www.bbc.com/portuguese/internacional-40200042.
- TROY, G. (2018). The Zionist Ideas. Philadelphia: University of Nebraska Press.
- SANTOS, Claudio Roberto dos. Judeus contra Israel: uma análise crítica do sionismo. 2018. Undergraduate thesis – Faculdade de Filosofia, Letras e Ciências Humanas, Universidade de São Paulo, São Paulo, 2018. Available at: https://repositorio.usp.br/directbitstream/574a7296-fa43-40c3-a3a3-228543917353/2019_ClaudioRobertoDosSantos.TGI.pdf. Accessed: Oct. 17, 2023.
- SILVA, Daniel Neves. "Guerra dos Seis Dias"; Brasil Escola. Available at: https://brasilescola.uol.com.br/historiag/guerra-dos-seis-dias-poder-israelense.htm. Accessed: June 25, 2024.
2024 Legislative & Budget Priorities

2024 #1 Priority Legislation

AB 1955 (Ward, LGBTQ Caucus) – SAFETY Act
The Support Academic Futures & Educators for Today's Youth Act (SAFETY Act) would strengthen existing California protections against forced outings of LGBTQ+ students in schools; provide critical resources for parents and families of LGBTQ+ students to support them in working towards family acceptance on their own terms; and provide additional protections to educators who face retaliatory actions from administrators and school boards for seeking to create an inclusive and safe school environment.

2024 Priority "Sponsored" Legislation

AB 1899 (Cervantes) – Gender-Inclusive Jury Questionnaires
This bill requires the Judicial Council to create a template juror questionnaire that is inclusive of gender expression and identity.

AB 1979 (Ward) – Doxing Victims Recourse Act
This bill provides recourse for victims who have been harmed as a result of being doxed by allowing a victim to pursue civil action to receive restitution for the harms endured.

AB 2258 (Zbur) – Protecting Access to Preventive Services
This bill codifies longstanding federal guidance that health plans and insurers must cover services that are integral to providing recommended preventive care – including anesthesia and polyp removal during a colonoscopy; placement, management, and removal of long-acting reversible contraceptives; and ancillary and support services for PrEP, including HIV and other STI screening – without cost sharing.

AB 2442 (Zbur) – Expedited Medical Licensure for Gender-Affirming Care
This bill requires the expedited processing of licensure applications by the Medical Board of California, the Osteopathic Medical Board of California, the Board of Registered Nursing, the Physician Assistant Board, the Board of Behavioral Sciences, and the Board of Psychology for applicants demonstrating a commitment to providing gender-affirming health care or gender-affirming mental health care services within their licensed scope of practice.

AB 2477 (Zbur) – Foster Care Cash Savings
This bill gives youth transitioning to adulthood from foster care the chance to build the best financial safety net possible by updating state law to clarify that young adults can accumulate cash savings while in foster care.

AB 2498 (Zbur) – California Housing Security Act
This bill aims to prevent individuals from falling into homelessness by providing rent subsidies to a range of rent-burdened populations, including former foster youth, older adults, adults with disabilities, people experiencing unemployment or homelessness, and recently incarcerated people.

AB 3031 (Lee and Low) – Statewide LGBTQ+ Commission
This bill establishes a Statewide LGBTQ+ Commission to serve as a state-level focal point for identifying key issues for the Caucus to prioritize in the future.

SB 11 (Menjivar) – California State University Mental Health [Two-Year Bill]
This bill would require the CSU to decrease the ratio of students to mental health counselors to address increased student needs, and to work to create a pipeline for CSU students to become mental health professionals. It would also increase data collection on CSU's mental health services and student wellbeing.

SB 729 (Menjivar) – Health Care Coverage for Infertility and Fertility Treatment [Two-Year Bill]
This bill would expand access to fertility care for Californians, including coverage for in vitro fertilization (IVF).
It would also revise the definition of infertility to ensure same-sex couples are covered by health insurance and treated without discrimination.

SB 954 (Menjivar) – Youth Health Equity + Safety (YHES) Act
This bill seeks to address the sexually transmitted infection (STI) epidemic among California youth and improve equitable public health outcomes statewide by expanding teen access to condoms in schools and communities.

SB 957 (Wiener) – SOGI Data Collection
This bill requires the California Department of Public Health (CDPH) to collect sexual orientation and gender identity (SOGI) data from third-party entities, including local health jurisdictions, on any forms or electronic data systems, unless prohibited by federal or state law. The bill also requires CDPH to provide an annual report to the public and to the Legislature on its efforts to collect, analyze, and report SOGI data.

SB 959 (Menjivar) – TGI Resources Website
This bill establishes an online resource for transgender, gender diverse, and intersex (TGI) people and their families to combat misinformation and provide accurate information about access to trans-inclusive health care, existing legal protections for patients and health care providers, and other available support services.

SB 990 (Padilla) – LGBTQ+ Disaster Relief Plans
This bill requires Cal OES to consult with LGBTQ+ organizations and advocates in the community when creating the State Disaster Plan.

SB 1278 (Laird) – World AIDS Day
This bill enshrines December 1st as World AIDS Day, a day globally recognized in solidarity with people affected by HIV.

SB 1333 (Eggman) – HIV Data Sharing
This bill requires state and local health department employees and contractors to sign the confidentiality agreement annually, and repeals the annual review of the agreements. Additionally, this bill authorizes disclosure to other local, state, or federal public health agencies or to medical researchers when confidential information is necessary for the coordination of, linkage to, or reengagement in care for the person.

SB 1491 (Eggman) – LGBTQ+ Higher Education Equity
This bill, beginning with the 2026–27 school year, requires the Student Aid Commission to provide a written notice to students who receive state financial aid regarding whether their postsecondary educational institution has an exemption from either the Equity in Higher Education Act or Title IX on file with the commission.

2024 Endorsed "Supported" Legislation

AB 1810 (Bryan) – Incarcerated Peoples' Menstrual Products
Caucus Co-Author: Assemblymember Zbur
This bill ensures that any incarcerated person and/or youth who menstruates or experiences uterine or vaginal bleeding has ready access to, is allowed to use, and continues to use materials necessary for personal hygiene without having to request them.

AB 1825 (Muratsuchi) – The California Freedom to Read Act
Caucus Principal Co-Author: Assemblymember Ward
This bill prohibits public libraries from banning books based on partisan or political reasons, viewpoint discrimination, gender, sexual identity, religion, disability, or on the basis that the books contain inclusive and diverse perspectives.

AB 3161 (Bonta) – Equity in Health Care Act: Ensuring Safety and Accountability
Caucus Co-Author: Assemblymember Jackson
This bill requires hospitals to analyze patient safety events by sociodemographic factors, like race, ethnicity, language, sexual orientation, and disability status.
This will reveal the health disparities that communities of color and LGBTQ communities are facing. Additionally, AB 3161 requires hospital safety plans to include a process for addressing racism and discrimination and their impacts on patient health and safety.

SB 1022 (Skinner) – Defending Housing, Employment, and Other Civil Rights Violations
Caucus Co-Author: Senator Wiener
This bill empowers the Civil Rights Department (CRD) to stop systemic workplace discrimination by doing the following: (1) clarify that deadlines that apply to individual complaints do not apply to complaints initiated by CRD or to group/class claims being prosecuted by CRD; (2) allow CRD to rectify long-running civil rights violations for the benefit of all victims, not only recent victims; (3) allow CRD to pause investigations when the parties agree; and (4) allow housing discrimination cases to be brought in any county where CRD has an office.

May Revise Budget Priorities

Preserve all funding for the LBTQ Women's Health Equity Initiative Fund within the CDPH Office of Health Equity's Gender Health Equity Section by authorizing existing funds to transfer from FY23/24 to FY24/25.
Reject proposed cuts to the CYBHI – Public Education and Change Campaign funding within the CDPH Office of Health Equity to ensure LGBTQ+ preventive mental health programs are prioritized, including local LGBTQ organizations and the statewide LGBTQ campaign, and replace proposed cuts with a more equitable level of funding reduction.
Reject proposed cuts to "The Future of Public Health" initiative at the CDPH Office of Health Equity to ensure LGBTQ community services within local health departments are supported for sexual health and harm reduction programs.
Support requested expenditure authority of $725,000 for the Department of Health Care Services (DHCS) to support the addition of intersexuality to the voluntary self-identification information collected by state departments and entities, pursuant to the requirements of AB 1163 (Lesbian, Gay, Bisexual, and Transgender Disparities Reduction Act).
Support requested expenditure authority of $710,000 for the Department of Public Health (CDPH) to implement system changes to collect voluntary self-identification information pertaining to intersexuality in the course of collecting demographic data, pursuant to the requirements of AB 1163 (Lesbian, Gay, Bisexual, and Transgender Disparities Reduction Act).
Support requested expenditure authority of $718,000 for Health Care Access and Information (HCAI) to support implementation of required planning by hospitals for increasing the diversity of procured vendors, pursuant to the requirements of AB 1392 (Rodriguez), Chapter 840, Statutes of 2023.

Priority Budget Requests (In Alphabetical Order)

ADAP Rebate Fund Loan Reduction & Modernizations – This budget request reduces the Governor's proposed $500 million loan from the AIDS Drug Assistance Program (ADAP) Rebate Fund to the General Fund (GF) to $250 million, of which $5 million of the loaned ADAP-to-GF funds must go towards SB 954 (Menjivar, 2024), the YHES Act.
Additionally, this budget request seeks the following modernizations to ADAP: (1) ADAP and PrEP-AP eligibility increase from 500% of the Federal Poverty Level (FPL) to 600% FPL – $3.5 million (one-time); (2) Harm Reduction Clearinghouse increase – $10 million (one-time); (3) lift of the Health Insurance Premium Payment cap on premium payments – $3.5 million (one-time) and $7 million (ongoing); (4) TGI Wellness and Equity Fund – $5 million (ongoing); and (5) needs assessments and analyses both for gap identification of client navigation and retention services and for the PrEP Navigation Program – $400,000 (one-time).

California Coalition of Transgender Immigrants – This budget request seeks $250,000 in funding to be divided into three programs to help bring equity, justice, and inclusion for Transgender, Gender Non-Conforming, and Intersex (TGI) immigrants: (1) Trans Immigrant Asylee program – $150,000; (2) Trans Inter-Sectional Unity program – $50,000; and (3) Trans Emerging Leadership and Artist program – $50,000.

Raise-A-Child Foster Family Recruitment & Retention Expansion – This budget request seeks $1 million in funding to accelerate the expansion of Raise-A-Child services throughout California, to go towards: (1) recruitment promotion campaigns; (2) community events and engagement; (3) virtual information and orientation sessions; and (4) technical assistance and support.

Renewal of Preservation of LGBTQ+ History Program Historical Archives – This budget request seeks to renew previously allocated funding for the "Preservation and Accessibility of California's LGBTQ+ History Program," a competitive grant program administered by the California State Library. This program supports LGBTQ+ archives of all sizes for projects that work to preserve and make publicly accessible collections relevant to the LGBTQ+ movement, culture, experience, and/or history in California, and provides vital information services, including research opportunities, youth engagement, and academic enrichment.

San Francisco Harvey Milk Plaza ADA Updates – This budget request seeks to invest $5 million in funding towards the installation of a new ADA-compliant main stair and a new escalator to access the entrance to the Castro Muni Station at Harvey Milk Plaza.
Nadine Alameh, Executive Director of the Taylor Geospatial Institute, joins Megan Lynch to preview TGI's inaugural Town Hall event on May 22.
Ask anyone in business long enough to find success and they'll likely tell you that relationships got them there. Cosmo Tires, operated by Tire Group International, is no different and is continuing to lean on its uncanny ability to cultivate strong, family-first partnerships as a cornerstone for growth. "With everything that we do as a brand, if it's not all authentic and we're not able to reach the consumers where they interact with brands in the marketplace, then we lose some of that connectivity. It can get filtered, and we want to make sure that we capture everything from the street, take it all the way into the organization, and feed it directly into product development," says Dominick Montouri, the chief strategy officer for Tire Group International. "We want to make sure we keep that chain very, very short, but also keep it authentic, and you don't lose any of those key attributes about what really matters to consumers in our space." In this episode of What's Treading with Tire Review, Montouri shares insights into how Cosmo Tires plans to expand its footprint and its engagement with consumers and partners, ensuring that the brand's evolution is just as much about meaningful connections and solutions in the tire industry as it is about growth. Want more What's Treading? Click here. Tire Review: www.tirereview.com AAPEX: www.aapexshow.com
Marge Cole, Director of Consortium Engagement at TGI, joins Megan Lynch to discuss how AI could boost the work being done on geospatial science in the St. Louis region.
TGI(good)F, bro! Let's get it!
TODAY ON THE SHOW:
EASTER.EGG.ROULETTE is back!
Spring Break BUST!
What movie did you WALK OUT of???
Who is the SMARTEST in the ROOM?!?
+ SOmuchMORE!!
00:00:00 Intro - LCS and TGI updates
00:15:55 treethan's take: we should do team tryouts in NA like they do in KR
00:30:40 play's take: team owners should be livid about Riot's broadcast shortcomings
00:57:09 franz's take: if TL doesn't start picking up wins, they should drop CoreJJ, not APA
01:13:12 brandywine's take: the PROS show is the best thing to happen for the scene in years
01:30:10 skizzle's take: SR's failure to keep Chime and pick up a good top laner is the cause of their poor performance and low placement
01:47:55 aemulator's take: C9 are infodoomed like no other team in the West
01:58:14 Outro
00:00:00 Intro
00:16:33 outsane asks Mark why he's the man for the job
00:31:31 fearfactor's take: better comms are necessary for the LCS to improve
00:39:12 farmerginge asks Mark what viewers should keep in mind while they wait for implementation
00:50:22 Avajou asks Mark for his favorite HLL moments
01:03:05 Andrew asks if TGI can be unbiased
01:11:59 DoubleG's take: MarkZ IS the man for the job
01:19:45 Cody's take: the LCS might struggle to fill Mark's shoes on the broadcast at first
01:29:50 praeco asks if there are differences between esports and tsports that can be leaned into
01:45:45 big angry hobo wonders if it's too late for 2024 LCS changes
01:52:30 CaptFlowers calls in to congratulate Mark
01:59:00 Azael and Emily join HLL
02:10:20 rudy asks Mark what he's excited to see evolve about the product moving forward
02:20:07 Outro