POPULARITY
Is Shedeur Sanders getting bad advice? | Take 5 Wednesday: Top WR Duos
Éditions Grasset has just launched a new collection devoted to the tales of the regions of France... The world is full of tales and legends! But they do not all begin with "Once upon a time...", they do not always have a moral, and they do not always end with a princess's wedding either! France is no exception: every region has its own beliefs and its own imaginary heroes. Have you ever wondered how the Loire was born? What King Arthur's last song was? And have you ever heard of Jean de l'Ours? The regions and territories tell us a great deal about these forgotten, dormant memories. With four inaugural volumes, Éditions Grasset has just published four collections of tales from different regions of France. Each selected author has a personal tie to the region they wrote about. Denis Gombert started from the observation that a tale is "re-told", and that this is even the condition of its survival. "It is out of love for the tale, but also out of love for our literary heritage, that we imagined this collection." (Denis Gombert) The tale's places of memory were identified first. The idea was born during the 2020 lockdown, when many people returned to their home regions. "I love the link between all the tales; there is a universal thread. These are immemorial texts that draw on symbolic roots, in a relationship with planet Earth, with nature." (Agnès Michaux) For her collection, Agnès Michaux worked from corpora at the BnF, where she found documents dating from the 19th century, notably the Revue des Traditions Populaires, which contains very raw versions of the tales. The tales therefore had to be reworked to make them readable in print. Guests: Denis Gombert, a former teacher who is now an author and editor and directs the collection "La France par ses contes", and Agnès Michaux, author and translator, who wrote "Contes de la Loire", one of the first volumes published by Éditions Grasset. The three other volumes available: "Contes des Pyrénées", "Contes de Normandie" and "Contes d'outre-mer". And, as every week, find Lucie Bouteloup's column "La puce à l'oreille", in partnership with Éditions Le Robert and the blog Dis-moi Robert, with lexicographer Benjamin Rouxel and the help of pupils from the École Léon-Maurice Nordmann in Paris. Today we decode the expression "être un fayot" (to be a brown-noser). Musical programming: Cerrone featuring Laylow with the track "Experience disco symphony".
Reggaeton to the fore: shatta and frafra for Sophian, and a return to Turkish roots for Selman. The show opens with Sophian Fanen's monthly playlist: Blaiz Fayah, Maureen and DJ Glad, "Money Pull Up", from the album Shatta Ting (Creepy Music, 2025); Florence Adooni, "Mam pe'ela su'ure", from the album A.O.E.I.U (Philophon, 2025); Greentea Peng, "One Foot", from the album Tell Dem It's Sunny (Greentea Peng/Awal, 2025); Verito Asprilla, "Yo soy quien la rompe" (single, Llorona Records, 2025); Gal Costa, "O dengo que a nega tem", from the 1972 two-track Compacto (Universal Music, 2025). Then we welcome Selman Faris for the album Kaplan, voyage vers l'Est. After a first EP (Neva) praised for its singular Mediterranean groove, producer and multi-instrumentalist Selman Faris presents a debut album conceived as a grand musical journey inspired by Asia, from Turkey to Japan, between heritage and modernity. The son of Turkish flautist Kudsi Erguner, and a collaborator of Nekfeu, Alpha Wann, Laylow and, most recently, Aya Nakamura, Selman has established himself as an essential composer and musician on the French scene. With Kaplan, voyage vers l'Est ("kaplan" meaning "tiger" in Turkish), he broadens his horizons and weaves a sonic narrative in which traditional instruments meet jazz, ambient and funk textures. An ambitious work, in the image of the tiger whose tracks he follows. Tracks played from the album: "Ubud", "Ispahan", "Sehir Uzakta" and "Moshi Moshi". ► Album Kaplan, voyage vers l'Est (Kiraz/Roche Musique). YouTube - Instagram
Steak and Sandra are back with some brief NFL and Falcons talk, discussing the recent social media post by Kirk Cousins. Steak states that he believes the quarterback needs to stay in the background a little more and let things play out, allowing the Falcons to find the proper trade. They then share some thoughts on headlines out of the NFL owners' meetings, including Mike Tomlin speaking on the Steelers and a potential Aaron Rodgers signing.
Tiësto - Ultra Music Festival, Miami, 2025 (Day 1) 01. Queen vs. Lil Jon vs. Firebeatz - Bohemian Rhapsody vs. Welcome To The House Tiësto vs. Lose My Sh!t (Tiësto Mashup) 02. Queen - Bohemian Rhapsody 03. Firebeatz - Lose My Sh!t / Tiësto & KSHMR feat. VASSY - Secrets 04. Tiësto & Poppy Baskcomb - Maximal Crazy vs. Drifting vs. WOW (Tiësto Mashup) 05. Tiësto & Poppy Baskcomb - Drifting 06. Tiësto - Maximal Crazy 07. Tiësto - WOW 08. Tiësto & The Chainsmokers - Lay Low vs. Split (Only U) (Tiësto Mashup) 09. Tiësto & The Chainsmokers - Split (Only U) 10. Tiësto - Lay Low 11. Tiësto & Tate McRae vs. Martin Garrix - 10:35 vs. The Only Way Is Up (VIP) vs. Grapevine (Tiësto Mashup) 12. Martin Garrix & Tiësto - The Only Way Is Up (VIP) 13. Tiësto - Grapevine 14. Tiësto ft. Tate McRae - 10:35 15. Tiësto & Sevenn - BOOM 16. Tiësto & Charli xcx - Hot In It (Tiësto Hotter Mix) 17. Tiësto & Diplo vs. Ava Max - C'mon vs. The Motto (Tiësto Mashup) 18. Tiësto & Diplo - C'mon 19. Tiësto & Ava Max - The Motto 20. Tiësto vs. Da Hool vs. Green Velvet & Harvard Bass - Meet Her vs. Lazer Beams (Tiësto Mashup) 21. Green Velvet & Harvard Bass - Lazer Beams (Acappella) 22. Tiësto vs. Da Hool - Meet Her 23. Da Hool - Meet Her At The Love Parade 24. Zerb & Sofiya Nzau - Mwaki (Tiësto VIP Mix) 25. Bad Bunny - DtMF (Tiësto Remix) 26. Equinøx - Conmigo 27. ID - Comprendo 28. Odd Mob & GOODBOYS - ID 29. Noir & Haze - Around (Cyava Edit) 30. Tiësto - ID 31. ID - ID 32. Tiësto & Black Eyed Peas - Pump It Louder 33. Lola Young - Messy (Łaszewo Remix) 34. Cygnus X & Ummet Ozcan vs. Rozalla vs. Odd Mob & OMNOM - Free Control (Tiësto Edit) 35. Rozalla - Everybody's Free (To Feel Good) 36. Cygnus X - Superstring (Ummet Ozcan Remix) 37. Odd Mob & OMNOM - Losing Control 38. ID - ID 39. Moonphazes & DAMEN - Grapevine 40. A$AP Ferg - Work (Proppa & Rich DietZ & Smith & Sorren Remix) 41. Tiësto & SIDEPIECE - Raven (SIDEPIECE Treat) 42. Tiësto - Lethal Industry (Rose Ringed Remix) 43. Swedish House Mafia ft. Pharrell Williams - One (Your Name) 44. ID - ID 45. RÜFÜS DU SOL - Innerbloom (RUMPUS Edit) 46. Notre Dame - Yumi (Tiësto Remix) 47. Hardwell vs. Space Laces & VENG - Spaceman vs. Dominate (Tiësto Mashup) 48. Space Laces - Dominate (VENG Flip) 49. Hardwell vs. Space Laces - Spaceman vs. Dominate (Tiësto Mashup) 50. Axel Boy - Get Ruff 51. Tiësto & James Bell vs. RL Grime & Knock2 & Abi Flynn - The Business vs. come aliv3 (Tiësto Mashup) 52. Tiësto ft. James Bell - The Business 53. RL Grime & Knock2 ft. Abi Flynn - come aliv3 54. Onlynumbers ft. Lucie Hart - Euphoric Night 55. Clean Bandit & Tiësto ft. Leony - Tell Me Where You Go 56. Gwen Stefani vs. Teriyaki Boyz - Hollaback Girl vs. Tokyo Drift (Macon 148 BPM Remix) 57. Teriyaki Boyz - Tokyo Drift (Fast & Furious OST) 58. Gwen Stefani - Hollaback Girl 59. Tiësto - ID 60. Tiësto - Adagio For Strings 61. Dimitri Vegas & Like Mike & Tiësto & W&W ft. Dido - Thank You (Not So Bad)
Just days after her Breitbart interview raised the ire of Pierre Poilievre's campaign team, Alberta Premier Danielle Smith is headed to Florida to co-host a conservative fundraiser with podcaster Ben Shapiro. Smith says the appearance is a continuation of her efforts to influence U.S. foreign policy on trade in favour of Canada. Her critics say she's doing more harm than good. 2:50 | Rob Breakenridge says Danielle Smith should "lay low for a while". We talk to the longtime conservative commentator about his feature in The Line. READ ROB'S PIECE: https://www.readtheline.ca/p/rob-breakenridge-danielle-smith-might 47:15 | Jespo and Johnny roll out a cool new way to support the show on YouTube. Shout out to our newest channel members! JOIN JESPO AT THE ICCHANGE GALA on APRIL 12: https://www.icchange.ca/2025gala 1:07:00 | Did you see the Marjorie Taylor Greene blowup? Thoughts? GET TICKETS for EDIFY'S BEST RESTAURANTS EVENT on APRIL 7: https://tickets.edifyedmonton.com/best-restaurants-2025/ 1:28:30 | Are you sober or sober curious? 1:34:00 | Alberta NDP MLA Rakhi Pancholi apologizes after reportedly calling UCP Minister Jason Nixon "a colossal piece of sh*t". We take a look at what Real Talkers are saying on our Live Chat powered by Park Power. PAY LESS FOR INTERNET, ELECTRICITY, and NATURAL GAS: https://parkpower.ca/realtalk/ KNOCK 50% OFF an annual subscription to Alberta Views with the promo code AVRJ: https://albertaviews.ca/ REGISTER FOR THE REAL TALK GOLF CLASSIC: https://www.ryanjespersen.com/real-ta... FOLLOW US ON TIKTOK, X, INSTAGRAM, and LINKEDIN: @realtalkrj & @ryanjespersen JOIN US ON FACEBOOK: @ryanjespersen REAL TALK MERCH: https://ryanjespersen.com/merch RECEIVE EXCLUSIVE PERKS - BECOME A REAL TALK PATRON: patreon.com/ryanjespersen THANK YOU FOR SUPPORTING OUR SPONSORS! https://ryanjespersen.com/sponsors The views and opinions expressed in this show are those of the host and guests and do not necessarily reflect the position of Relay Communications Group Inc. or any affiliates.
P.M. Edition for Mar. 24. As President Trump ramps up his attacks on the legal industry, law firms are split on how to respond. WSJ national legal-affairs reporter Erin Mulvaney discusses the implications for the industry. Plus, Trump recalibrates his plans for tariffs on goods from particular sectors and says he might soften reciprocal tariffs on some nations, though the back-and-forth is hard on U.S. small businesses. Senior special writer Ruth Simon joins to talk about how small businesses are responding. And shares in Tesla, a longtime stock-market highflier, are down more than 30% this year. Reporter Hannah Erin Lang explains why. Alex Ossola hosts. Sign up for the WSJ's free What's News newsletter. Learn more about your ad choices. Visit megaphone.fm/adchoices
It is Greenland week on Rás 1 this week. On Monday the programme featured an interview with Jósep and Skúli about Kalak, the Iceland-Greenland friendship society, and today photographer Ragnar Axelsson joined us. Ragnar, or RAX, has travelled around Greenland for some 40 years, and in that time he has taken remarkable photographs and collected fragments of stories about Greenlanders' hard struggle for survival alongside man's best friend, the sled dog. We asked Ragnar to tell us about Greenland, this magnificent country, about the people he has come to know there, and about the changes he has witnessed over those four decades. We then received a postcard from Magnús R. Einarsson. Today's card comes from the Westman Islands and recounts Magnús's memories of the Beatles and Liverpool while he was in Berlin this past week. He recalls a trip to a Paul McCartney concert in Liverpool and describes how the city has changed in the almost forty years since he first visited the haunts of the four lads who had captivated him in his youth. Music on today's programme: Hvíl í ró / Lay Low and Fjallabræður (Lay Low); Here Comes the Sun / The Beatles (George Harrison); Penny Lane / The Beatles (Lennon & McCartney). HOSTS: GUÐRÚN GUNNARSDÓTTIR AND GUNNAR HANSSON
"For some, merch brings in much more than the music"
Thanks to recent successes and public enthusiasm, the music video sector in Côte d'Ivoire is getting organized, but it is still far from being a true industry, and its players would like to see the country follow the model of English-speaking nations. From our correspondent in Abidjan: The new generation of Ivorian video directors set out to imitate the best of what their English-speaking neighbours were doing, drawing inspiration in particular from Nigerian hits such as Wizkid's "Joro" (2019, 300M views) or Rema's "Dumebi" (2019, 83M views). "I take a lot of inspiration from what's being done in Ghana," says Young Nouchi, artistic director and video director since 2020 at the label Coast 2 Coast, one of the sector's pioneers. "I saw that the scene was pretty lively over there, so I brought that back from Ghana. The director who inspired me is called David Duncan; I managed to work with him on several projects. At our label we travel a lot: we shoot videos in France with big teams, in Ghana... But when you see how things work over there and how they work here, there's a really big difference. There's good talent, but let's say there isn't really an industry. But we're on our way!" Music videos are never profitable in the short term, stresses one of the sector's big names, Sheku Tall, who runs Coast 2 Coast, because television channels, social media and online video platforms like YouTube generate little or no revenue. "It's a big marketing investment. YouTube only monetizes big markets, where it can run a substantial advertising operation. We are a country of 28 million people, so we're not on their radar," Sheku Tall explains. "Senegal, for its part, got monetized thanks to a Senegalese man who worked at Google. Senegal had been a country of arts and letters since President Senghor, who always backed artists more than is done here, but it was thanks to that man that they were able to monetize in Senegal. So now it's up to our institutions, to the artists, to the producers, to everyone, to get moving and demand the same thing." [Read also: Le Juiice, "la Trap Mama", in Abidjan] "We still have a lot of realities wearing us down." Several professionals in the sector, including Sheku Tall himself, are trying to get Côte d'Ivoire onto the enormous market that YouTube represents. But in the meantime, he explains, it is hard to turn it into a real business: "I wouldn't say there is an industry; I'd say we're heading that way, but we still have a lot of realities wearing us down and keeping us from being well organized. We have no subsidies for videos, we have no training programmes," he laments. "Without training and without funding, it's a bit hard to say we're moving forward as an industry. But in terms of publicity, whether for the artist, for the label or for the city where we shoot, we're sending strong messages that come back to us. In the end we recoup it somewhere. Maybe not in the business model, but hey, we just have to create one!" Abidjan has become the favourite backdrop for the videos of many francophone artists from the sub-region, but also for French rappers, who are coming to shoot there in ever greater numbers: Kaaris first, then more recently Niska, Lala &ce and Laylow. And if places like Abidjan's new bridge have quickly become iconic, it is partly thanks to the many videos shot there.
Today's Song of the Day is “Good Sun” from Eddie Chacon's album Lay Low, out now.
This week, host Andreas Müller discusses the following new releases with the two music journalists Martin Böttcher and Christoph Reimann: "Hurry Up Tomorrow" by The Weeknd, "Lay Low" by Eddie Chacon, "List Of Demands" by Damon Locks, and "The Purple Bird" by Bonnie 'Prince' Billy.
Eddie Chacon shot to fame as part of the duo Charles & Eddie, alongside the late Charles Pettigrew, with the release of their enduring 1992 single ‘Would I Lie to You?’ In the years after Charles & Eddie amicably split in 1997, Chacon explored another passion of his: fashion photography. But music called him back and, in 2020, he released the solo album ‘Pleasure, Joy and Happiness’, followed by ‘Sundown’ in 2023. Now he has returned with the gorgeous new record ‘Lay Low’. On today’s show, Chacon joins Robert Bound in the studio to discuss his unique, heartfelt strain of R&B and the inspiration behind the dreamy tracks on his new record. See omnystudio.com/listener for privacy information.
This Day in Legal History: Pendleton Civil Service Reform Act

On January 16, 1883, the U.S. Congress enacted the Pendleton Civil Service Reform Act, a landmark piece of legislation that fundamentally transformed federal employment practices. The act was a response to widespread corruption and inefficiency in the government, fueled by the patronage or "spoils" system, which awarded jobs based on political loyalty rather than competence. Signed into law by President Chester A. Arthur, the Pendleton Act marked a critical shift toward merit-based hiring and promotion within the federal workforce.

The law initially applied to only about 10% of federal jobs, requiring competitive examinations to determine qualifications. However, it granted the president authority to expand the classified service, allowing successive administrations to broaden its scope. The act also established the Civil Service Commission, the first federal agency tasked with overseeing adherence to these new standards of fairness and efficiency.

This reform was catalyzed by public outcry following the assassination of President James A. Garfield in 1881 by a disgruntled office seeker. The tragedy underscored the dangers of a system rife with favoritism and incompetence, galvanizing bipartisan support for change. Over time, the principles of the Pendleton Act have become cornerstones of American civil service, contributing to the professionalization and stability of the federal government.

By curbing patronage and introducing accountability, the act helped restore public trust in government operations. It also served as a model for state and local reforms and influenced broader discussions about the role of expertise in public administration. Today, the Pendleton Act is recognized as a foundational moment in the evolution of modern governance in the United States, laying the groundwork for a more impartial and effective civil service system.

Victims of recent Los Angeles wildfires are leveraging California's unique legal doctrine of "inverse condemnation" to seek damages from Southern California Edison (SCE), even if the utility was not negligent. This doctrine, traditionally used against government entities for property damage, has been extended to utilities, making them liable for property damage caused during public service operations, regardless of fault. SCE is facing numerous lawsuits over the Eaton Fire, which destroyed thousands of structures and caused at least 24 deaths. Plaintiffs claim the fire originated near SCE's high-voltage transmission towers, although the company reports no operational anomalies on its lines before or during the fire.

California law does not require plaintiffs to prove negligence for property damage claims under inverse condemnation. However, proving negligence could enable claims for personal injuries and wrongful death. The lawsuits cite substantial economic losses and damages exceeding insurance coverage. To mitigate financial impacts, a $21 billion state wildfire insurance fund is available, capping SCE's exposure at $3.9 billion.

These cases, expected to take years to resolve, highlight the escalating legal and financial consequences for utilities in wildfire-prone areas.

California utility faces billions in claims for fire damage even if it did nothing wrong | Reuters

Pam Bondi, nominated by Donald Trump for U.S. attorney general, assured the Senate Judiciary Committee that she would not politicize the Justice Department, but refused to rule out investigating Trump critics.
Bondi, who previously served as Florida's attorney general and defended Trump during his 2019 impeachment trial, emphasized her focus on issues like violent crime and human trafficking while acknowledging she would evaluate investigations and potential pardons on a case-by-case basis.

Democratic lawmakers expressed concerns about her independence, referencing Trump's pledge to target his adversaries and the dismissal of two past attorneys general who defied him. Bondi criticized Special Counsel Jack Smith's investigations into Trump as partisan but claimed she would maintain fairness. Republicans praised Bondi, urging her to restore the Justice Department's reputation and combat crime and border issues. Democrats questioned her involvement in promoting Trump's election fraud claims and her support for FBI director nominee Kash Patel, who has been linked to controversial conspiracy theories. Bondi acknowledged Biden's 2020 victory but suggested irregularities in Pennsylvania. The committee continues vetting other controversial cabinet nominees ahead of Trump's upcoming inauguration.

Trump nominee Pam Bondi vows independence, but won't rule out probes of Trump critics | Reuters

In my column for Bloomberg this week I focus on the strategic risks of advocating for retirement account tax reforms during the anticipated extension of the Tax Cuts and Jobs Act (TCJA) provisions under a new Trump administration. Extending these provisions, a top priority, will cost an estimated $4.6 trillion over the next decade, creating a politically and fiscally sensitive environment where other tax code changes could face heightened scrutiny. The 403(b) retirement accounts, designed for public employees and nonprofit workers, are particularly vulnerable because of their association with significant tax expenditures, which totaled over $300 billion in 2022 and are projected to exceed $2 trillion by 2026. Advocates for reform in areas like expanding 403(b) investment options should avoid pushing these changes now, as drawing attention to retirement accounts could lead to cuts framed as cost-saving measures. History shows that retirement savings provisions are not immune to political pressure, with past examples including the TCJA's elimination of Roth IRA recharacterizations and narrowly avoided cuts to 401(k) benefits. In this high-stakes fiscal landscape, strategic patience is essential. Advocates are advised to focus on preserving existing provisions rather than risking unintended consequences by pursuing reform during an unfavorable political moment.

Retirement Account Reformists Should Wait to Push Tax Code Changes

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.minimumcomp.com/subscribe
Read along to practice your English and to learn the English phrases TO LAY LOW, TO LIE LOW, and A LOW BLOW.

In this English lesson, I wanted to help you learn the English phrases to lay low or to lie low. This is one of the situations where we use lie and lay in the same way, and it means to hide. Usually you will hear these phrases if you're watching a show where the police are chasing some criminals, and the criminals might decide to lie low or to lay low. Sorry, lie low, lay low. I should say them in the right order. Lay low. Sounds kind of funny when you say it, though. It kind of rolls off the tongue in a funny way. Anyways, the criminals might decide to lay low or to lie low. That means they're going to hide somewhere where the police can't find them.

WANT FREE ENGLISH LESSONS? GO TO YOUTUBE AND SEARCH "BOB THE CANADIAN"

If you enjoy these lessons please consider supporting me at: http://www.patreon.com/bobthecanadian

The other phrase I wanted to teach you today is a low blow. So a low blow in boxing is if you punch below the waist. Like if you punch below the belt, it's considered a low blow. But we also use this phrase to talk about any behavior or action that's not really nice. So for instance, if I lent my brother $100 and told him, you can pay me back in a year, and then if I asked him for the $100 back tomorrow, that'd be a low blow. Like I'd be doing something that's not very nice and not very kind.

So to review: to lay low or to lie low means to kind of hide out. Maybe you've stolen something and the police are after you, so you hide out at your cousin's place. You decide to lay low. You decide to lie low. And a low blow is any time you do something to someone that's just not very nice and not something they were expecting.

But hey, let's look at a comment from a previous video. I'm just going to read the beginning of this comment from Know that, for the sake of time. It's a great comment, but we'll get the gist of it. From Know that: "Every now and then you squeeze the recording of your video lessons for us into your lunch break, Bob. I was wondering if you plan these, let's say, trips, or if you do them more spontaneously, depending on what's on your to-do list for the day," and then you can read the rest. My response: sometimes it is on a whim, sometimes it is intentional, sometimes it is just convenient. There isn't a rhyme or reason most of the time. Sometimes it just depends on the weather.

So thanks, Know that, for that comment and that question. Yeah, sometimes it's just, you know, how I'm feeling that day. Or maybe I'm doing an errand, so I just jump out of the van and do an English lesson as well. Today I came out this way because it's just a nice area to do it. I know that some of you like seeing views of the farm, and some of you like seeing little glimpses of Canadian life. You like seeing trucks drive by, and you like seeing what's happening in my local town. So yeah, sometimes I do it intentionally.

If you watch my members-only video on my other channel today, I did that intentionally. There's a new bakery in town, so I decided to walk past it. I didn't go in, but I do know that sometimes some of you like seeing just a little glimpse of Canadian life.

There's a glimpse of Canadian life coming towards me right now. There's someone walking three dogs. So I think I might actually move into this driveway and let them go past. I'm not afraid of dogs. But do you remember that one video where a dog almost bit me? That wasn't very enjoyable. Hi. How you doing? Good.
There you can see the dogs going by. If I sounded extra cheerful with my hello, that's because I recognized that person. It's a former student of mine, so I didn't want to say her name, but she was a good student. Support the show
Former NFL offensive lineman and FOX Sports Radio Weekend host Ephraim Salaam is in for Kelvin, and he and Rob share their thoughts on the Los Angeles Lakers' decision to pass on acquiring a third star alongside LeBron James, and beat Super Producer Rob G to a pulp in this week's edition of The Hot Seat. See omnystudio.com/listener for privacy information.
On this episode: Kendrick Lamar album now Gold; Andrew Schulz comments & aftermath; stadium tickets for GNX sales extra dates; Master P's son stealing; Tank Davis retirement; Denzel highest box office sales; LeBron lost 30 mill; snack wrap is back!; Scrubs reboot; CEO sniped; Euphoria still has hopes for 2026; new Solange & Victoria Monet albums coming; Drake lawsuit with UMG Dec 10th?; J. Cole inevitable - born sinner process; Hov issues; giving Boulder, Colorado our credit. And MUCH MORE! --- Support this podcast: https://podcasters.spotify.com/pod/show/brandon-riddley/support
Maarten Devoldere and his solo project Warhaus have always sounded pretty sexy. But Devoldere, otherwise one of the two singers of Balthazar, has never sounded as hot-blooded as he does on his fourth solo album. That may be because, after working through a break-up on the previous record ("Ha Ha Heartbreak", 2022), it is high time to stretch at least one foot back toward the dating pool. And, spoiler: "Karaoke Moon" should do that job far better than any funny Tinder bio. +++ PLAYLIST +++ · 22:55 – THE SEA BRINGS, WAVES OF CASTED SILVER SOFTLY CRAWLS, INTO MOSS WE SINK by BEN KACZOR & NICULIN BARANDUN · 22:51 – MÉLANCOLIE by Lescop · 22:43 – TABLE DEATH SET by MORD FUZZTANG · 22:39 – SEEN TOO MUCH by MORD FUZZTANG · 22:37 – GUESS IT'S WRECKED by MOIN FEAT. OLAN MONK · 22:32 – ADRIFT by ARTHUR HNATEK · 22:30 – DURAN DURAN by WESTSIDE GUNN & DJ DRAMA · 22:23 – TV OFF by KENDRICK LAMAR FEAT. LEFTY GUNPLAY · 22:19 – SEXY CLOWN by Marie Davidson · 22:14 – BODYS CHORUS by SKELETEN · 22:10 – LAY LOW by CASANORA · 21:55 – THE REASON by MOUNT JACINTO · 21:46 – MAHASHMASHANA by FATHER JOHN MISTY · 21:37 – SCREAMLAND by FATHER JOHN MISTY · 21:32 – WHAT GOES UP by WARHAUS · 21:28 – I'M THE GHOST YOU FORGOT by J. BERNARDT · 21:22 – ZERO ONE CODE by WARHAUS · 21:19 – MEMORY / LIVE @ SRF 3 by WARHAUS · 21:14 – HANDS OF A CLOCK by WARHAUS · 21:11 – SWEET LOVE by SYLVIE KREUSCH · 21:06 – NO SURPRISE by WARHAUS · 21:03 – SINKING SHIP by BALTHAZAR
Sure, we can call what the British songwriter does on his fourth studio album "neo" soul. But of course there is plenty of "retro" in it too. Even so, every nostalgia trap is elegantly sidestepped. With his previous record, "Kiwanuka" (2019), the London musician won the coveted Mercury Prize (fellow nominees included Stormzy, Charli XCX, Dua Lipa and Sports Team), so there was really little reason to make too many "changes" for the fourth album. The same team stood behind the studio desk again (Inflo of Sault and Brian Burton, alias Danger Mouse); only the bass guitar saw a small change: luminary Pino Palladino (D'Angelo, The Who, Eric Clapton) played the album's wonderfully warm bass lines. And yes, this record actually comes out *this* Friday, having been pushed back a week at short notice. We are making "Small Changes" our Album of the Week as of now anyway, and giving away vinyl copies daily. +++ PLAYLIST +++ · 22:56 – 2468 by HORSEGIRL · 22:51 – YOU GOT ME SEARCHING by JACK WHITE · 22:46 – STAY HERE by FORT ROMEAU & GOLD PANDA · 22:42 – CASPIAN TIGER by BEIRUT · 22:38 – GETTING REMINDERS by EFTERKLANG FEAT. BEIRUT · 22:34 – WE MUST HAVE BEEN ASLEEP by AINO SALTO · 22:30 – WINDOW HOP by SUBAQUA · 22:26 – NIGHT OR DAY by FRANZ FERDINAND · 22:22 – WHAT YOU MEANT by FRANZ FERDINAND · 22:18 – THE BLOOD RETURNS by CASANORA · 22:12 – LAY LOW by CASANORA · 22:09 – THE HARDEST BUTTON TO BUTTON by THE WHITE STRIPES · 21:57 – ELEPHANT by 070 SHAKE · 21:54 – SIN by 070 SHAKE · 21:47 – ESCAPISM. by RAYE FEAT. 070 SHAKE · 21:45 – BOXES IN MY BASEMENT by ERICK THE ARCHITECT · 21:39 – CA$HMERE TEAR$ by ERICK THE ARCHITECT · 21:34 – N.Y. STATE OF MIND by NAS · 21:27 – BUSS DOWN by DAVE EAST & ARAABMUZIK FEAT. FABOLOUS · 21:22 – BREATHING by MARY J. BLIGE FEAT. FABOLOUS · 21:19 – AFRIKAN DI ALIEN by PA SALIEU FEAT. BLACK SHERIF · 21:13 – STYLE & FASHION by PA SALIEU FEAT. OBONGJAYAR · 21:10 – POINT & KILL by LITTLE SIMZ FEAT. OBONGJAYAR · 21:06 – PRAY by CORDAE FEAT. TY DOLLA $IGN · 21:03 – SUMMER DROP by CORDAE FEAT. ANDERSON.PAAK · 20:55 – SORROW by JOE ARMON-JONES FEAT. LIAM BAILEY · 20:49 – AJALA by EZRA COLLECTIVE · 20:46 – BLACK MAN IN A WHITE WORLD by MICHAEL KIWANUKA · 20:42 – REBEL SOUL by MICHAEL KIWANUKA · 20:38 – LOWDOWN (Part i) by MICHAEL KIWANUKA · 20:33 – AIN'T THAT EASY by D'ANGELO & THE VANGUARD · 20:28 – THE REST OF ME by MICHAEL KIWANUKA · 20:25 – SMILEY FACES by GNARLS BARKLEY · 20:20 – WILDFIRES by SAULT · 20:17 – FLOATING PARADE by MICHAEL KIWANUKA · 20:10 – COLD LITTLE HEART by MICHAEL KIWANUKA · 20:07 – WORLD ON A STRING by JESSICA PRATT · 20:03 – LIFE IS by JESSICA PRATT
Hometown hero Eddie Chacon, who you might know as half of the duo Charles & Eddie via their ‘90s mega-hit “Would I Lie To You,” is back with his second solo album for Stones Throw Records. Thirty years after the heyday of his pop success and on the heels of his 2023 LP Sundown, Chacon’s new album Lay Low is due on Jan. 31, 2025. As a bonus, it’s produced by one of our other favorite singers — Nick Hakim. “Empire” is a stunningly soulful track featuring the multi-talented LA staple John Carroll Kirby.
Ellen McFarlane --- Support this podcast: https://podcasters.spotify.com/pod/show/aei-leon/support
On this episode we talked with Dan from Laylow Brewery! We discussed everything going on with Laylow, along with a review of the best music of 2024... so far. Beer(s) of the week: Zenith — Belgian Golden Ale, 6.3% ABV - Laylow Brewery; Shandy — German Style Lager with Lemonade, 4.7% ABV - Old Nation Brewing Co. Show Notes 0:00 - Intro 1:50 - Beer 6:40 - Catching up on Laylow news 15:46 - Best verse of the year so far... 21:00 - Best track of the year so far... 27:38 - Best album of the year so far... 33:43 - Who's winning the year so far? 39:40 - Predictions for remainder of the year Follow us everywhere @beerzandbarz
Welcome back y'all - what a week! This week our #OTWEEKLYPLAYLISTS has sounds from Billie Eilish, Vedo, Vanilla Is Black & 6lack, and Rapsody! During #MUSICNEWS of course we get into the egregious tape "leaked" last week of Sean "Diddy" Combs caught on camera domestically abusing Cassie, and the latest allegations from the fall out. We also discuss Apple Music's 100 Greatest Albums of all time list. In #THEBLACKNESS - we talk through this year's 2024 BET Award Nominations, and we also shine a #QUEENSPOTLIGHT on Simone Biles, Gabby Douglas, & 18 year old Dr. Dorothy Jean-Tillman who received her associates, bachelors, and masters by the age of 14! Follow Us: All Links: https://linktr.ee/otwweekly Instagram/Twitter: @onthewayweekly FB: facebook.com/onthewaypod | Youtube: https://bit.ly/3CWxgPZ Website: instinctent.com/ontheway | www.mochapodcastsnetwork.com/ontheway Sylvee - @sylveejones Kahlil - @kahlilxdaniel | www.kahlildaniel.com | www.facebook.com/kxdmusic Tap in to our latest playlists too! Learn more about your ad choices. Visit megaphone.fm/adchoices
MixTape 098 - Summer 2024 DJ Beats Mix: 01. Alex Grey, Emily Dawn, Coral Reef - Another Day In Paradise (00:00:00) 02. ZERB, Sofiya Nzau - Mwaki (02:20:51) 03. R3HAB, INNA, Sash! - Rock My Body (05:28:62) 04. SRNDE, YKATI - Don't Let Me Be Misunderstood (06:58:09) 05. David Guetta, Anne-Marie, Coi Leray - Baby Don't Hurt Me (09:08:14) 06. Alex Grey, House Arrest, Bikini Bandits - Relax (10:46:17) 07. Tiësto, Tears For Fears, NIIKO X SWAE, Gudfella - Rule The World (Everybody) (12:16:17) 08. Andyrave - Corazón (14:55:45) 09. Tiësto - Lay Low (16:51:14) 10. Last Call, Sundays, Bikini Bandits - The Riddle (19:04:33) 11. Jay Dixie, DVSK, Tremble - Lemonade (21:15:44) 12. The Him - Umbrella (23:46:07) 13. David Guetta, Benny Benassi - Satisfaction (26:03:62) 14. The Him - Summertime Sadness (28:01:74) 15. Bikini Bandits, Valentina Star, Jack Fruit - Superstar (31:08:26) 16. Tiësto, Tate McRae - 10:35 (33:02:06) 17. Last Call, Sundays, Bikini Bandits - Say It Right (35:43:06) 18. Last Call, Sundays, Bikini Bandits - Rolling In The Deep (37:55:44) 19. David Guetta, Hypaton, La Bouche - Be My Lover (feat. La Bouche) (2023 Mix) (40:41:07) 20. Martin Garrix, Third Party, Oaks, Declan J Donovan - Carry You (42:23:54) 21. David Guetta, Bebe Rexha - I'm Good (Blue) (45:05:31) 22. Peggy Gou - (It Goes Like) Nanana (Edit) (47:47:19) 23. Sam Feldt, Jonas Blue, Endless Summer, Violet Days - Crying On The Dancefloor (48:58:30) 24. Armin van Buuren, Punctual, Alika - On & On (51:03:47) 25. Alvin Anthony, BLUTH - Missing (53:06:38) 26. Cyril - Stumblin' In (54:41:50) 27. Bikini Bandits, Alex Grey, House Arrest - Lovely Day (57:40:48) 28. Bikini Bandits, Alex Grey, Emily Dawn - I'm Coming Out (59:26:46) 29. summer sax, Poolside Pirates, Islnd - Rise Up (61:13:30) 30. Tod Allen, Sonja - Sunrise (63:39:20)
KSL TV reporter Lindsay Aerts | Sports Roulette | Final thoughts
In 2024, it's never been a tougher time for craft breweries and brewpubs, but for Toronto's Laylow Brewery, the recent closure of their taproom opened up amazing new possibilities. Co-Founders Dan and Colin joined Cee at home in Hamilton to chat about their first canned beer ever thanks to Avling Brewery, the pivot from brewpub to beer and lifestyle brand, the impact the community has had on them, why they chose to brew different beers to what's currently hyped, how they work with the Hip Hop community, how they approach their clothing brand, their next steps and their Top 5 rappers and rap groups of all time (one of our favourite segments if it ever comes up). They crushed their brand new Zenith Belgian Golden Ale and a relic Lightworks Hibiscus Wheat Beer bottle. This was fire, enjoy! BAOS Podcast Subscribe to the podcast on YouTube | Website | Theme tune: Cee - BrewHeads
We can be tempted by a lot of things we shouldn't give in to. But when we yield to humility, the world changes for the better. Read along with the message: 1 Peter 5:5; Matthew 14:15–16; John 6:14–15; 13:3–4, 12–13, 15; Philippians 2:6–8
Dan, the co-owner of Laylow Brewery, joins the podcast to talk about creating beer, music, gear, and a unique meeting place along with connections in the community. He lets us in on what's next for Laylow, including what's brewing, talks about his family roots, being a dad, and of course, what's on feet. Host: Jon Ratner @headzaintredee jratner@gmail.com @sneakerdads Beats: Chili Banks
Duration: 00:05:15 - From the France Inter playlist - The rapper S.Pri Noir has just released his third album and is about to give a concert on France Inter on 28 February. All good reasons to take a tour of his already very rich discography.
Our guest this week is a designer for whom discovery is at the heart of what she does. Her design skills have been featured on HGTV as the co-host of Aloha Homes, and her firm has won multiple awards, including two Gold Key Awards. Joining the show is President and Creative Director of The Vanguard Theory, Michelle Jaime!

Michelle joins host Dan Ryan to discuss Hawaii's unique culture of hospitality. Michelle also shares her experiences as an entrepreneur, and dives into the need for authentic representation of Hawaiian culture in design.

Takeaways:

Hospitality in Hawaii is ingrained in the culture, where locals often adopt and warmly embrace visitors, sharing their traditions, food, and experiences. Genuine care and immersing guests are authentic aspects of life, both on a personal and industry level.

Hawaiian firms often design Hawaiian projects, but rarely design mainland projects, hindering their recognition and opportunities outside of Hawaii. This disparity may be attributed to misconceptions about distance and cultural specificity.

To gain recognition and attract profitable opportunities, it is important for small businesses to showcase their achievements. Building successful partnerships and establishing a strong reputation can lead to further growth beyond the local market.

When working with clients, it's important to build strong relationships and understand their goals, budgets, and timelines. By designing around constraints and considering logistics, you create high-quality projects that meet expectations regardless of location.

It is crucial for non-local design firms working on projects in Hawaii to be sensitive to the local culture and avoid appropriation. Engaging with local fabricators, artists, and designers is essential to ensure an authentic representation of the community.

Being cautious about scaling and expanding too quickly is crucial, especially in areas where external factors can significantly impact businesses and lead to layoffs. Prioritizing sustainable growth can help prevent potential negative consequences.

Many people face the challenge of lacking business guidance and mentorship, relying on failure to learn and overcome obstacles.
Despite the absence of a mentor, the experience of navigating failures can drive personal growth and resilience.

Quote of the Show: "If we don't engage the community, that's not really authentic." - Michelle Jaime

Links:
LinkedIn: https://www.linkedin.com/in/michelle-jaime-487a5b4/
Website: https://www.thevanguardtheory.com/

Shout Outs:
1:05 - The Laylow: https://www.marriott.com/en-us/hotels/hnlak-the-laylow-autograph-collection/overview/
4:26 - Philpotts Interiors: https://www.philpotts.net/
4:28 - Jonathan Staub: https://www.linkedin.com/in/jonathan-staub-a8486914/?originalSubdomain=no
4:30 - Marion Philpotts-Miller: https://www.linkedin.com/in/marion-philpotts-miller-74127111/
5:21 - Kelsey Grammer
5:44 - Frasier
6:35 - Hirsch-Bedner: https://www.hba.com/
7:37 - HGTV: https://www.hgtv.com/
7:40 - David Jaime: https://www.linkedin.com/in/david-jaime-430a1480/
9:13 - Shaleah Soliven: https://www.linkedin.com/in/shaleah-soliven-03191726/
12:07 - Rockywold Deephaven: https://www.rdcsquam.com/
18:12 - Gensler: https://www.gensler.com/
19:37 - Independent Lodging Congress: https://ilcongress.com/
19:38 - BDNY: https://bdny.com/
19:40 - Hospitality Design: https://hospitalitydesign.com/
19:48 - Gold Key Awards: https://goldkeyawards.com/
20:29 - The Surfjack Hotel: https://surfjack.com/
21:14 - Green Oak
21:19 - Ben Rafter: https://www.linkedin.com/in/ben-rafter-623a702/
21:26 - Erik Warner: https://www.linkedin.com/in/erik-warner-3139175/
21:27 - Stephen Chen
22:52 - Billy Madison
22:56 - Adam Sandler
28:08 - White Sands Hawaii: https://www.whitesandshotel.com/
28:11 - Hotel Renew: https://hotelrenew.com/
35:02 - Westin Maui: https://www.marriott.com/en-us/hotels/hnmwi-the-westin-maui-resort-and-spa-kaanapali/overview/
35:32 - JMI Realty: https://www.jmirealty.com/
38:06 - Small Giants: https://a.co/d/io2lBqt
39:57 - Coach Mackey
42:02 - Susan Cain: https://susancain.net/
42:04 - Quiet: https://a.co/d/38TjZQO

Ways to Tune In:
Spotify: https://open.spotify.com/show/0A2XOJvb6mGqEPYJ5bilPX
Apple Podcasts: https://podcasts.apple.com/us/podcast/defining-hospitality-podcast/id1573596386
Google Podcasts: https://podcasts.google.com/feed/aHR0cHM6Ly93d3cuZGVmaW5pbmdob3NwaXRhbGl0eS5saXZlL2ZlZWQueG1s
Amazon Music: https://music.amazon.com/podcasts/8c904932-90fa-41c3-813e-1cb8f3c42419
Listen to the year-end edition of Top Club Chart: the 50 best electronic hits of 2023. 1. Jengi - Bel Mercy (Faustix Remix) (#50) 2. Oliver Tree & Robin Schulz - Miss You (Tom Budin Remix) (#49) 3. Coi Leray - Players (David Guetta Remix) (#48) 4. James Mac & Vall ft. Rosalie - The Boy Is Mine (#47) 5. Jain - Makeba (Ian Asher Remix) (#46) 6. Argy & Omnya - Aria (#45) 7. DJ Snake & Wade ft. Nooran Sisters - Guddi Riddim (#44) 8. MK & Sonny Fodera ft. Clementine Douglas - Asking (#43) 9. Dimitri Vegas & Steve Aoki - The White Lotus Theme (Aloha) (#42) 10. Steve Angello & Wh0 - What You Need (#41) 11. Alexander Popov & Chester Young & Whiteout - Overtaking (VIP Mix) (#40) 12. Billy Gillies feat. Hannah Boleyn - DNA (Loving You) (#39) 13. Larse - A Part Of (Riva Starr Saturn Mix) (#38) 14. James Hurr & Mark Knight vs. Modjo - Lady (#37) 15. BYOR & Shift K3Y - Whistle (#36) 16. Crazibiza - Fresh (House of Prayers Poolside Edit) (#35) 17. BLOND:ISH, Madonna, Eran Hersh & Darmon - Sorry (#34) 18. D.O.D - Set Me Free (#33) 19. Rezone & MACROLEV - El Ritmo (#32) 20. Creeds - Push Up (#31) 21. ALOK & James Arthur - Work With My Love (Mark Knight Remix) (#30) 22. Mochakk - Jealous (#29) 23. James Hype ft. Kim Petras - Drums (#28) 24. Skrillex & Mr. Oizo ft. Missy Elliott - RATATA (#27) 25. Calvin Harris & Sam Smith - Desire (Don Diablo Remix) (#26) 26. cassö x RAYE x D-Block Europe - Prada (#25) 27. Anyma & Chris Avantgarde - Eternity (#24) 28. DJ KUBA & NEITAN x Bounce Inc. - Work My Body (#23) 29. Mau P & Kevin de Vries - Metro (#22) 30. FISHER ft. Kita Alexander - Atmosphere (#21) 31. Grigoré - El Tiempo (#20) 32. Skrillex & Boys Noize - Fine Day Anthem (#19) 33. deadmau5 & Kaskade ft. Haley Gibby - I Remember (John Summit Remix) (#18) 34. Tiësto - Lay Low (#17) 35. Swedish House Mafia ft. Fridayy - See The Light (#16) 36. Mau P - Dress Code (#15) 37. ESSEL - Sweat (#14) 38. John Summit & Hayla - Where You Are (#13) 39. Swedish House Mafia - Ray Of Solar (Mau P Remix) (#12) 40. TECH IT DEEP - Maria Maria (#11) 41. Alex Wann - Milkshake (#10) 42. Fred again.. & Skrillex & Four Tet ft. Lil Baby - Baby Again.. (#9) 43. Noizu & Westend ft. No/Me - Push To Start (#8) 44. A'studio, Polina - SOS (Skylark Remix Nic Fanciulli Edit) (#7) 45. MK & Dom Dolla - Rhyme Dust (#6) 46. Mau P - Gimme That Bounce (#5) 47. FISHER & Aatig - Take It Off (#4) 48. Argy & Goom Gum - Pantheon (#3) 49. Calvin Harris & Ellie Goulding - Miracle (#2) 50. Peggy Gou - (It Goes Like) Nanana (#1)
To kick off 2024, Cee and Tiff hung out for a deep dive into the multitude of reasons behind the alarming number of brewery closures that we've seen over the last 12-18 months. They looked at a number of the Ontarian and Canadian breweries that closed recently, and how several factors influenced where we're at currently, including the pandemic and incessant shutdowns, a rise in operational costs (mostly rent and ingredients), market saturation and a change in drinking preferences, government regulation and taxation, the Master Framework Agreement, and products that don't seem to work. They also touched on opportunities for the future, including regulatory changes, intentional marketing, diversity and equality initiatives, product diversity, and how consumers can support. They cracked two brews from now-defunct breweries - Laylow's Zenith Belgian Golden Ale and Still Fields' Hoppy Saison. Tons of learnings here - cheers! Links as per the conversation: RTD Segment Growth Master Framework Agreement Changes Ontario Tax Reform Keep Craft Beer Local Ontario Beer Taxation OCB Advocacy BAOS Podcast Subscribe to the podcast on YouTube | Website | Theme tune: Cee - BrewHeads
01. Dua Lipa – Houdini 02. Meysta, 2shy, Viktoria Vane Feat. Beccy – Can't Get You Out Of My Head 03. Annabell Kowalski – Hey Boy Hey Girl 04. Ely Oaks, Minelli – Fantasy 05. Fabiasco, Perfect Pitch – Loosen Up My Buttons 06. Arnon, Jonisa – G Paradise 07. Inna – Flashbacks (Ramirez & Yudzhin Remix) 08. Oliver Heldens, DJs From Mars Feat Jd Davis – Blue Monday 09. Basto – I Rave You 10. Eelke Kleijn – Transmission (Joris Voorn Remix) 11. Galwaro, Lizot, Gabry Ponte Feat. Charla K – Like A Prayer 12. Bassjackers – Wrong Or Right (The Riddle) 13. Lost Frequencies, Elley Duhe & X Ambassadors – Back To You 14. Pbh & Jack, Alex Hosking – Lost In The Moment 15. Firebeatz – Don't Stop Moving 16. Drenchill Feat. Indiiana – Feel This Way 17. Afrojack & R3hab Feat. Aura – Worlds On Fire 18. Sean Finn – Love And Pride 19. Filv, Vallhee – Cheri, Cheri Lady 20. Felix Jaehn Feat. Alma – Bonfire (Holderz Remix) 21. Zhu – Faded 22. Chico Rose Feat Afrojack & Mougleta – Alone Again 23. Rompasso & Kddk – Isaura 24. James Hype & Miggy Dela Rosa – Ferrari 25. Gayle – Abc (Fät Tony & Medun Remix) 26. Meduza Ft. Becky Hill, Goodboys – Lose Control (Andy Jarvis Remix) 27. Lu Kala – Hotter Now (Denis First Remix) 28. DJ Peretse X Koysina – Sky 29. Joel Corry, Icona Pop, Rain Radio – Desire 30. Matt Nash – Know My Love 31. Matte Black – Thriller X 32. Tiësto – Lay Low 33. David Guetta – Family Affair (Dance For Me) 34. Calvin Harris, Ellie Goulding – Miracle (Denis First Remix) 35. Martin Garrix And Dua Lipa – Scared To Be Lonely (Pride Remix) 36. Klaas – Fable 37. Renomty, Smola, Aaron Kaye – Stereo Love 38. Noize Generation, Stefy De Cicco – Faded 39. Elley Duhe, Denis First – Middle Of The Night 40. Armin Van Buuren Billen Ted Feat Jc Stewart – Come Around Again 41. Atb, Topic, A7s – Your Love (9pm Ramirez & Yudzhin Remix) 42. Lykke Li – I Follow Rivers (Andrey Vertuga Reboot) 43. Cassette – My Way (Ramirez & Yudzhin Remix) 44. David Guetta Feat Raye – Stay (Don't Go Away)(Mephisto Remix) 45. Dimitri Vegas & Like Mike Feat. Azteck & Hayley May – Heaven 46. Capital Cities – Safe And Sound 47. Vanotek, Denitia, Arroy. Sergey Raf – Someone 48. Le Pedre & Djs From Mars & Mildenhaus – Trouble So Hard 49. Kungs – Never Going Home 50. Lilly Wood & The Prick – Prayer In C (Robin Schulz Remix) 51. Moses, Emr3ygul Feat. Alexiane – A Million On My Soul (Remix) 52. Carla's Dreams – Sub Pielea Mea (Midi Culture Remix) 53. Calvin Harris Feat. Rihanna – This Is What You Came For 54. Denis First – Feel What You Want 55. Minelli – Nothing Hurts 56. Faruk Sabanci Ft. Mingue – Your Call (Amice Remix) 57. Farruko – Pepas (Robin Schulz Remix) 58. Don Diablo & Paolo Pellegrino – Dangerous 59. Adam Lambert – Ghost Town (Dave Winnel Remix) 60. Black Eyed Peas, Shakira – Girl Like Me (Maxx Lyon X Cole Mac Remix) 61. Example – Changed The Way You Kiss Me (Valeriy Smile Remix) 62. David Guetta, Becky Hill & Ella Henderson – Crazy What Love Can Do 63. Kshmr, Tigerlily – Invisible Children 64. Jkrs, Aizzo – Hung Up 65. Robert Cristian Feat. Alis Shuka – The Day Before (Valeriy Smile Remix) 66. Block & Crown, Lissat – Ocean Cake 67. Imanbek, Dvbbs – Ocean Of Tears 68. Yellow Claw Feat. Rochelle – Shotgun (Ps_Project & Danil Siyanov Remix) 69. Calvin Harris – Feel So Close (Bermuda Remix) 70. David Guetta, Dimitri Vegas Vs Nicole Sherzinger, Azteck – The Drop 71. Alle Farben – Bad Ideas (Denis First & Reznikov Remix) 72. Phao And Kaiz – 2 Phut Hon 73. Block & Crown, Atilla Cetin – How Many Nations
The Latent Space crew will be at NeurIPS on Tuesday! Reach out with any parties and papers of interest. We have also been incubating a smol daily AI Newsletter and Latent Space University is making progress.

Good open models like Llama 2 and Mistral 7B (which has just released an 8x7B MoE model) have enabled their own sub-industry of finetuned variants for a myriad of reasons:

* Ownership & Control - you take responsibility for serving the models
* Privacy - not having to send data to a third party vendor
* Customization - improving some attribute (censorship, multiturn chat and chain of thought, roleplaying) or benchmark performance (without cheating)

Related to improving benchmark performance is the ability to use smaller (7B, 13B) models, by matching the performance of larger models, which has both cost and inference latency benefits.

Core to all this work is finetuning, and the emergent finetuning library of choice has been Wing Lian's Axolotl.

Axolotl

Axolotl is an LLM fine-tuner supporting SotA techniques and optimizations for a variety of common model architectures. It is used by many of the leading open source models:

* Teknium: OpenHermes, Trismigestus, CollectiveCognition
* OpenOrca: Mistral-OpenOrca, Mistral-SlimOrca
* Nous Research: Puffin, Capybara, NousHermes
* Pygmalion: Mythalion, Pygmalion
* Eric Hartford: Dolphin, Samantha
* DiscoResearch: DiscoLM 120B & 70B
* OpenAccess AI Collective: Manticore, Minotaur, Jackalope, Hippogriff

As finetuning is very formatting dependent, Axolotl also provides prompt interfaces and formatters for a range of popular model formats, from Stanford's Alpaca and Steven Tey's ShareGPT (which led to Vicuna) to the more NSFW Pygmalion community.

Nous Research Meetup

We last talked about Nous at the DevDay Recap at the e/acc "banger rave". We met Wing at the Nous Research meetup at the a16z offices in San Francisco, where they officially announced their company and future plans, including Nous Forge.

Show Notes

We've already covered the nuances of Dataset Contamination and the problems with "Open Source" in AI, so we won't rehash those topics here, but do read/listen to those if you missed them.

* Axolotl GitHub and Discord
* The Flan paper and dataset
* StackLlama model and blogpost
* Multipack paper
* Our episode with Tri Dao
* Mamba state space models - Tri Dao and Albert Gu

Timestamps

* [00:00:00] Introducing Wing
* [00:02:34] SF Open Source AI Meetup
* [00:04:09] What is Axolotl?
* [00:08:01] What is finetuning?
* [00:08:52] Open Source Model Zoo
* [00:10:53] Benchmarks and Contamination
* [00:14:29] The Case for Open Source AI
* [00:17:34] Orca and OpenOrca
* [00:23:36] DiscoLM and Model Stacking
* [00:25:07] Datasets and Evals over Models
* [00:29:15] Distilling from GPT4
* [00:33:31] Finetuning - LoRA, QLoRA, ReLoRA, GPTQ
* [00:41:55] Axolotl vs HF Transformers
* [00:48:00] 20x efficiency with StackLlama and Multipack
* [00:54:47] Tri Dao and Mamba
* [00:59:08] Roadmap for Axolotl
* [01:01:20] The Open Source AI Community

Transcript

[00:00:00] Introducing Wing Lian

[00:00:00] swyx: Welcome to Latent Space, a special edition with Wing Lian, but also with our new guest host, Alex. Hello, hello. Welcome, welcome. Again, needs no introduction. I think it's like your sixth time on Latent Space already. I think so, yeah. And welcome, Wing. We just met, but you've been very prolific online. Thanks for having me.

[00:00:30] Yeah. So you are in town. You're not local. You're in town. You're from Minneapolis?

[00:00:35] Wing Lian: Annapolis. Annapolis.
It's funny because a lot of people think it's Indianapolis, or they've got Minneapolis, but I used to live out in the San Francisco Bay Area years ago, from like 2008 to 2014, so it's fairly familiar here.[00:00:50] swyx: Yep. You're the maintainer of Axolotl now, which we'll get into. You're very, very prolific in the open source AI community, and you're also the founder of the Open Access AI Collective. Yeah. Cool. Awesome. Maybe we can go over a little bit of your background in tech and then coming into AI, and then we'll cover what[00:01:06] Wing Lian: happens and why you're here.[00:01:08] Yeah. So, back on tech, so I started years ago, I started way back when I was scraping apartment websites for listings and then building like SEO optimized pages and then just throwing Google AdSense on it.[00:01:24] And that got me through like college basically. Is[00:01:27] swyx: that decent money? And what year[00:01:28] Wing Lian: was this? Like 2004, 2005. Yeah, that's decent money. It's like a thousand bucks a month. But as a college student, that's like gravy. Really good money, right? So, and then there was just too much competition, it just sort of died off. I was writing stuff in like Perl back then, which nobody hosts anything on anymore, right? Still did a little bit more like computer tech support, and then software, and web more professionally.[00:01:54] So I spent some time working on applications in the blood industry. I came out to San Francisco for, I was at SGN, so Social Gaming Network, as a startup. They started doing, with Facebook apps, and then they pivoted into doing mobile apps. And then, from there, I spent time.[00:02:14] I've done quite a few more startups since then, and in the last few years I've been in the music space. So like I was at United Masters for a while, and then this past year I've been at SoundCloud, but not doing that anymore, and now that I have a lot more time, it's just like, all right, we're going full bore on Axolotl and we're gonna crush AI. So yeah,[00:02:34] SF Open Source AI Meetup[00:02:34] swyx: totally. You, so you're here in town for the open source AI meetup that we had yesterday. Yep, yeah, that was amazing. Yeah, it was a big collection. Ollama, Nous Research, Alignment Lab, anyone else that I missed? I mean, Jeremy Howard is his own thing.[00:02:47] Yeah.[00:02:49] And Alex, you're also there. You love to bring SF to the world. Your takes?[00:02:55] Alex Volkov: It's incredible that we recorded a ThursdAI episode after that one. And LDJ, who usually co-hosts ThursdAI, just like briefly mentioned, Oh yeah, I talked about it.[00:03:04] Like, I saw Karpathy, and then I talked to Jeremy Howard, and the guy from Mistral came in, and it's like, he's talking about all these titans of industry, basically, that outside of SF you just don't meet casually hanging out in the same space. You can't just pull somebody. He ran into Lample from Mistral, he ran into him while drinking water.[00:03:20] He didn't even know he was there. It's just, that type of stuff is really hard to find outside of SF. So, absolutely, absolutely great. And also, presentations from Alignment Lab, presentations from Nous Research, who talked about Forge, and some of[00:03:33] swyx: the other stuff they announced. We can say now they're officially a company.[00:03:36] I met Teknium.[00:03:37] He[00:03:37] Alex Volkov: came over here. He didn't want to get recorded.
But maybe.[00:03:41] Wing Lian: We'll wear him down at some point. Yeah, I'm excited for Forge. They've positioned it as this agentic sort of framework where you just drag and drop things and fill in text with where you want to inject different variables, and it opens up all of these potentials for data pipelines now, right?[00:03:56] And using your own local LLMs and not relying on GPT 4 or anything like that. Yeah, yeah,[00:04:02] swyx: good stuff. Okay, so let's maybe go into the Axolotl origin story, and then we have some intro or background.[00:04:09] What is Axolotl?[00:04:09] swyx: To do on like the open source model universe and also on fine tuning, but maybe just, since you're talking about your personal journey, what was your personal journey into[00:04:18] Wing Lian: Axolotl?[00:04:19] Yeah, so my personal journey started like back in mid March, completely unrelated to AI and Axolotl. And it really started, I fell while skiing, I torqued, grade 3 MCL sprain, and being sort of like an active person that can no longer be active, couldn't play soccer, because that requires having knees, until it's healed.[00:04:42] So I decided I needed to find something to do to take up my free time. And that became, well, let's learn how to train these language models. It was everywhere. So I was like, all right, I'm just going to sit down, learn. I think I was using like AlpacaLoRA, cause I think the Alpaca paper had just come out then. So I was using the AlpacaLoRA repo and sort of like learning how to use it. None of us were like GPU rich back then, and most of us are still GPU poor, but I was doing, what was it, like the 4-bit AlpacaLoRA, there was like a 4-bit version where we were doing quantizations, or 8, no, 8-bit quantizations, and then I think they had released QLoRA a little bit later, and I think right before QLoRA came out, I was already starting to do fine tunes, but having this need to sort of like mix data sets together, and if you've ever looked at all the various different datasets available on HuggingFace, they all have various different prompt formats, and it's sort of a nightmare, and then I think the other piece is if you've ever tried to fine tune, at least back then, probably the ecosystem's a little better now,[00:05:54] everybody required that you say, alright, you put your hyperparameters as command line arguments. And so it's always like, well, I now have to go copy and paste my previous thing and change things out. And I really wanted it to be in a YAML file because it was more portable and reproducible.[00:06:09] So I was doing that, and then the QLoRA paper came out. Tim Dettmers announced that, and then somebody looked it up for me yesterday, and between that announcement it took us seven days to get that integrated into Axolotl, right? Which is like, it's not, I wouldn't say it's really fast, but in a manner that is in a reusable framework, I think it was quite the accomplishment then.[00:06:33] And so we started picking up traction with people there. And then it's just been building models, and then just iterating on what my needs are. So, yeah. Excellent. Yeah. I[00:06:44] Alex Volkov: want to ask, for folks who are listening who never heard of Axolotl, how do you describe it?[00:06:49] Can you, how do you summarize this for folks who maybe haven't fine tuned anything?
They know open source LLMs exist, they maybe know like LLaMA. What's Axolotl for somebody who doesn't know, who's never heard of dataset curation or[00:07:01] Wing Lian: creation before? We sort of have to take a step back and understand that, when you've got these language models, you have what I think most people refer to as like base models, also known as like foundational models, right?[00:07:15] Where some benefactor, whether it's Meta or Mistral or whoever, has gone and spent all this money to train these models on huge corpuses of text, right? And these corpuses, they're generally good across lots of different things, but they're really good at just saying, talking on and on and on, but they're not good at following instructions or having chats or anything like that.[00:07:40] So, when you think about fine tuning, it's like saying, all right, we have this really sort of good generalized text completion thing, and I want to turn it into something that I can talk to or have follow instructions. So, I think fine tuning is probably best defined like that.[00:07:58] swyx: Okay, got it.[00:07:59] And we actually[00:08:01] What is finetuning?[00:08:01] swyx: Do want to make sure that we have like an overall introduction to fine tuning for people, because again, we're trying to make sure that we bring everyone along in this journey. We already went into LoRAs and QLoRAs without explaining what[00:08:12] Wing Lian: they are. Oh yes, yes, sorry.[00:08:14] swyx: And so I will put things in my words and you can correct me; I'll be the village idiot here.[00:08:21] So, fine tuning is basically sort of grabbing an open source model off the shelf, and then basically doing further training on it with a custom dataset of your own. Primarily, people use it, think about it as fine tuning for JSON output, or fine tuning for a style of response. Let's say you wanted it to tell jokes, or be funny, or be short, or whatever.[00:08:43] Just the open source AI community has really fine tuned in all sorts of different manners. I think we'll go over those things now. Let's go over those things now, and then we'll talk about fine tuning methods.[00:08:52] Open Source Model Zoo[00:08:52] swyx: So there's a universe of people who fine tune stuff. Yesterday in your slides, you had, I'll just list some of these and then we'll maybe go through some of them, right?[00:08:59] So Teknium is personally leading OpenHermes, which is I think the sort of premier model out of the Nous community. There's OpenOrca, which you had a hand in. Nous, the Nous Research itself, also has Capybara and Puffin and all the others. There's Pygmalion, which I've never messed with.[00:09:14] Eric Hartford, I am aware of his Uncensored Models and his Samantha Models. DiscoResearch with DiscoLM. And then you personally have done Manticore, Minotaur, Jackalope, and Hippogriff. What should people know about all these names? Being part of AI Twitter is seeing all these things and going dude, I'm being DDoS'ed by all these things and I don't know how different they are.[00:09:32] What should people know? Yeah, so[00:09:34] Wing Lian: I think on a lot of these models, generally, we like to think of those as sort of general models. So if you think about it, what is GPT 4, what is ChatGPT?
It's a good general model, and then one of the services I think that OpenAI offers is like these fine tunings, where you're a business and you have very specific business use cases and you might fine tune for that use case.[00:10:00] All of these models are really just general use case models that you can then go and maybe fine tune another LoRA over for your use cases, but they tend to be good. With good being relative; it's open source. Open source AI is still sort of in its infancy. So, good is, it's pretty reasonable.[00:10:18] It's probably still better than most high schoolers at answering questions and being able to figure things out, and reasoning skills and math and those sorts of things, right?[00:10:27] swyx: And also as measured on the Hugging[00:10:29] Wing Lian: Face leaderboard. Yes, well, that's like a whole other discussion, right? There's a whole other group of people who, and I mostly agree with them, think that benchmarks are pretty bogus these days. LMSys, I think they published something recently where, even if you think the dataset's not contaminated, you can go and find contamination. And maybe we should step back and say what contamination is, right?[00:10:53] Benchmarks and Contamination[00:10:53] Wing Lian: So we have all of this data; when you go and do these benchmarks, there's a specific data set where there are these questions, and usually it's multiple choice. And what can happen is, well, sometimes someone puts the question, maybe maliciously, maybe accidentally, into the training dataset, and now your model knows how to answer the test questions really well, but it hasn't generalized the ability to actually do that.[00:11:20] Alex Volkov: Right.[00:11:21] We've seen some folks competitively announce models that are like the best at that leaderboard, but then it's quite obvious that, in open source? Yeah, and in that leaderboard, for Hugging Face specifically, I don't know if LMSys had suffered that, but there's been some models that seem to have been competitively trained and some leakage happened into their data, supposedly.[00:11:43] swyx: I understand, once there's been a credible assertion, Hugging Face actually does take them down, right? Yeah, yeah,[00:11:48] Alex Volkov: which is really hard to know, right?[00:11:50] swyx: It's really hard to know, sometimes it's like a pure accident,[00:11:52] Alex Volkov: it's oh, oops. You're going through a mixer. I think a responsible acknowledgement that this kind of thing happened to you is also important.[00:11:58] I saw LDJ from Nous Research acknowledge that. Because many of these datasets are collections of other datasets. A bunch of people are baking, basically. It's alchemy. Right. And so sometimes you don't know. Sometimes you pull an open source dataset and they announce, oh, you know what, actually, the MMLU benchmark, which we use to specifically evaluate models, did go into this data set, that then went into that data set.[00:12:22] So sometimes it's actually an accident and folks take it down. But I've seen some competitive folks who want to put their name out there because people are starting to notice which is the top[00:12:30] swyx: model. For those who want a fun take on this, so the Phi-1 dataset, the Phi-1 model from Microsoft, was accused of being contaminated.[00:12:37] And I saw this joke paper that was fantastic. It was called, training on the test set is all you need. It's a super small model that just memorizes everything. It was fantastic.
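The kind of contamination check being discussed can be as simple as flagging training rows that share a long n-gram with a benchmark question. A minimal sketch of the idea; the 8-gram threshold and function names here are illustrative, not LMSys's or Hugging Face's actual method:

```python
# Sketch of a simple n-gram contamination check: flag training rows that share
# a long n-gram with any benchmark question. Threshold and names are illustrative.
def ngrams(text: str, n: int) -> set:
    toks = text.lower().split()
    return {" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def find_contaminated(train_rows, benchmark_questions, n: int = 8):
    bench = set()
    for q in benchmark_questions:
        bench |= ngrams(q, n)
    return [row for row in train_rows if ngrams(row, n) & bench]

bench = ["what is the capital of the country directly north of the united states"]
train = [
    "the capital of france is paris",
    "q: what is the capital of the country directly north of the united states a: ottawa",
]
print(find_contaminated(train, bench))  # flags the second row
```

Real pipelines use fancier fuzzy matching, but even this crude overlap test catches verbatim leakage.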
So yeah, contamination, I think we've actually covered it in a previous episode before. So we're good. But again, I want to give people a map into the open source AI model universe.[00:12:57] And Alex, you can also jump in here, because you guys have spent a lot more time with them than I have. So, what should people know about Teknium? What should people know about Nous? And then we can go down the list. Yeah,[00:13:05] Wing Lian: I think so. I think if we start with Teknium: when you talk to him, I think his response is that he wants to build GPT 4 on his laptop, right?[00:13:14] So, very, very good at building general models. I think with Nous, Nous Research, they're looking at more, sort of, more research focused things, like their YaRN models. They didn't actually train their, they have their own trainer for their YaRN models, but So they did not use Axolotl for that one?[00:13:30] They didn't use that, but like Is that, you don't have support for it? I think we do support YaRN, I think, I'd have to double check that answer. Yeah, I'm just kind of curious what you can and cannot support, and Yeah, I mean, YaRN is supportable, it's basically, I think it's just replacing, I think, the RoPE part of that, so yeah, not a big deal.[00:13:48] Yeah, it's not a big deal, it's just I haven't gotten to it, not enough people have asked, I think a lot of people have asked for other things, so it's just, squeaky wheel, right? I think at the end of the day, people are building these data sets, and I think if you sort of map things chronologically, these make more sense, because it's like, how do we incrementally improve all of these models?[00:14:07] So a lot of these models are just incremental improvements over the last thing, right? Whether it is sort of through methods of how did we curate the data set, how did we improve the quality of the data set. So, maybe LDJ talked about it, right, I think, for Capybara and Puffin, like how those were very specific dataset curation techniques that he works on.[00:14:29] The Case for Open Source AI[00:14:29] Alex Volkov: So there's, folks are doing this for dataset curation. Folks are doing this for skillset building as well. Definitely people understand that open source is very important, especially after the debacle, the OpenAI weekend that we all had. And people started noticing that even after developer day in OpenAI, the APIs went out.[00:14:48] And then after that, the whole leadership of the company swiftly changed, and people, there were worries about, you know, how can people continue building AI products based on these shaky grounds. That turned attention definitely to Teknium, at least with OpenHermes I started seeing this more and more on Twitter, but also other models and many companies. They're gonna start with OpenAI just to get there quick, and then they think about, okay, maybe I don't want to share my knowledge. Maybe I don't want to sign up for Microsoft. Maybe they will change their terms and conditions. So what else is out there? They turned to other companies. Up until yesterday, Google was nowhere to be found. We've talked about Gemini a little bit before in a previous episode. And you can tune in[00:15:26] swyx: to[00:15:26] Alex Volkov: ThursdAI.[00:15:26] Yeah, you can tune in to ThursdAI.
We covered the Gemini release a little bit. But many are turning to the open source community and seeing that Meta released, and continues to release and commit to, open source AI. Mistral came out, and the model is way smaller than LLaMA and performs significantly better.[00:15:43] People play with OpenHermes, which is currently Teknium-based, Nous Research-sourced, Axolotl-trained OpenHermes, I assume, right? And then they play with this and they see that, okay, this is like GPT 3.5 quality. We had ChatGPT's birthday just a week ago. A year ago, we never interacted with models of this caliber.[00:16:04] And now there's one open source, one that's on my laptop, completely offline, that I can continue improving for my use cases. So enterprises, companies are also noticing this. And the open source community folks are building the skill set, not only the data sets. They're building the actual kind of, here's how we're going to do this, with Axolotl, with these data sets.[00:16:21] The curation pieces. Now, interesting, there's like recipes of curation. The actual model training is kind of a competitive thing, where people go and compete on these leaderboards that we talked about, the LMSys arena, which recently added OpenHermes and recently added OpenChat and a bunch of other stuff that are super cool.[00:16:37] The Hugging Face open source leaderboard. And so there's a competitive aspect to this. There's the open source aspect to this, like Teknium says, I want GPT 4 on my laptop. There's the, let me build a skill set that potentially turns into a company, like we saw with Nous. Nous just started organizing a bunch of people on Discord, and suddenly they're announcing their company.[00:16:54] It's happening across all these modalities, and suddenly all these people who saw these green pastures and a fairly quick way to, hey, here's a cool online community I can start doing cool stuff with. You mentioned the same in the beginning, right? Like, after your accident, what's cool, let me try this out.[00:17:08] Suddenly I start noticing that there's a significant movement of interest from enterprising companies into these areas. And this skill set, these data sets, and this community is now very, very important, important enough to create an event which pulls in Andrej Karpathy from OpenAI to come and see what's new, Jeremy Howard, like the event that we just talked about; people are flying over, and this is just a meetup.[00:17:28] So, definitely, the community is buzzing right now, and I think Axolotl is a big piece as well.[00:17:34] Orca and OpenOrca[00:17:34] Wing Lian: Cool. Maybe we can talk about like Orca real quick, Orca, OpenOrca rather. I think there was a lot of buzz when the first Orca paper came out. And just briefly, what is Orca? Yeah, Orca was basically having traces of like chain of thought reasoning, right?[00:17:48] So they go and they distill sort of GPT 4. They take a sampling of data from the Flan dataset. Maybe we can add some show notes on the Flan dataset. Yeah, but we've covered it. Okay, cool. Use GPT 4 to say, all right, explain this in a step by step reasoning, right?[00:18:06] And then you take that, they train the model, and it showed very good improvements across a lot of benchmarks. So OpenOrca was sort of the open reproduction of that, since Microsoft Research never released that particular data set.
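For the mechanically curious, the loop Wing describes, replaying FLAN-style questions through a strong teacher model to collect step-by-step traces, can be sketched in a few lines. This uses the OpenAI Python client but is only an illustration: the system prompt wording is invented, and the real Orca and OpenOrca pipelines were considerably more involved.

```python
# Illustrative Orca-style trace collection (not the actual OpenOrca pipeline):
# ask a strong teacher model to answer questions with explicit reasoning,
# and keep the traces as instruction-tuning rows.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM = "You are a helpful assistant. Explain your reasoning step by step."

def collect_trace(question: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": question},
        ],
    )
    return {"question": question, "response": resp.choices[0].message.content}

traces = [collect_trace(q) for q in ["Which weighs more, a pound of feathers or a pound of gold?"]]
```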
And going back to sort of the Hugging Face leaderboard thing, those models did really well.[00:18:23] And then I think, sort of the follow up to that was SlimOrca, right? I think, going into and building the OpenOrca dataset, we never really went in and validated the actual answers that GPT 4 gave us. So what we did was, one of the guys from OpenChat actually cross referenced the original Flan responses, the human responses, the correct answers, with the dataset, and then I went and took it and sent both of them to GPT 4 and said, is this answer mostly correct, right?[00:18:54] Yeah. And then we were able to filter the dataset of the GPT 4 only answers from like 800,000 to like 500,000 answers or rows, and then retrain the model, and it had the same performance as the original model to within, I think, 0.1 percent thereabouts, with 30 percent less data.[00:19:13] So, yeah. Okay.[00:19:15] swyx: Interesting. So, I mean, there's so much there that I want to highlight, but yeah, Orca is interesting. I do want people to know about it. Putting chain of thought into the data set just makes a ton of sense. One thing I think would be helpful for people to scope these things out: how much data are we talking about when people are fine tuning, and then how much time or resources or money does it take to train, to fine[00:19:36] Wing Lian: tune?[00:19:37] Yeah, so I think there's a little bit of overlap there with sort of like fine tuning techniques, but let's say Orca, and I think even Hermes, they're both relatively large data sets. So large data sets being, the original OpenOrca was 800,000 rows.[00:19:55] I believe it was somewhere in the ballpark of like a gigabyte of text data. And I believe Hermes is like a quarter million rows of data; I don't know the actual byte size on that particular one. So, going and training a, let's say everybody's training 7 billion Mistral right now, right?[00:20:15] So, to train, I believe, to fine tune 7 billion Mistral on, let's say, 8 A6000s, which have 48 gigabytes of VRAM, I believe it takes about 40 hours, and then, depending on where you get your compute, it's like $500 to fine tune that model, and that's assuming you get it right the first time, right?[00:20:44] So, you know.[00:20:45] swyx: Is that something that Axolotl handles, like, getting it right the first[00:20:48] Wing Lian: time? If you talk to anybody, it's like you've probably tried at least three or four runs or experiments to find the right hyperparameters. And after a while you sort of have a feel for where you need your hyperparameters to be.[00:21:04] Usually you might do like a partial training run, do some benchmarks. So I guess for Farouk, whether you're going by his Twitter handle or Jeremy, his actual name, he released the Dharma dataset, which is basically a subset of all the benchmarks.
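As Wing explains next, Axolotl can re-run such a benchmark subset at every evaluation step. Mechanically, a stand-in sketch of the idea with a Hugging Face Trainer callback might look like this; the multiple-choice scorer is an illustrative stand-in, not Axolotl's implementation:

```python
# Stand-in sketch of running a small benchmark subset at each evaluation step.
# `subset` is a list of {"question": str, "choices": [str], "answer": int} rows.
import torch
from transformers import TrainerCallback

def score_one(model, tokenizer, row) -> int:
    # Pick the choice with the highest log-likelihood (one common scoring scheme).
    scores = []
    for choice in row["choices"]:
        ids = tokenizer(row["question"] + " " + choice, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss
        scores.append(-loss.item())
    return int(scores.index(max(scores)) == row["answer"])

class MiniBenchCallback(TrainerCallback):
    def __init__(self, tokenizer, subset):
        self.tokenizer, self.subset = tokenizer, subset

    def on_evaluate(self, args, state, control, model=None, **kwargs):
        acc = sum(score_one(model, self.tokenizer, r) for r in self.subset) / len(self.subset)
        print(f"step {state.global_step}: mini-benchmark accuracy {acc:.3f}")
```

The numbers won't match the official benchmark harness, but, as Wing says, the trend line is what you're after mid-run.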
And Axolotl actually supports, you know, taking that subset and then just running many benchmarks across your model every time you're doing an evaluation, so you can sort of see relative, it's not going to be the actual benchmark score, but you can get ideas, alright, is this benchmark improving, is this benchmark decreasing, based on, you know Wait,[00:21:39] swyx: why don't you run the full benchmark?[00:21:41] What, what, what The[00:21:42] Wing Lian: full benchmarks take a long time. Significant, yeah, significant amount of time. Yeah. And Okay, so that's like[00:21:48] swyx: mini MMLU. Yeah. Like,[00:21:49] Wing Lian: mini BigBench or whatever. Yep, exactly.[00:21:51] Alex Volkov: It's really cool. When I joined Weights & Biases just recently, one of the things that I tried to do is, hey, I'm a software engineer by trade, I don't have an MLE background, but I joined a company that does primarily MLE, and I wanted to learn from the community, because a lot of the open source community, they use Weights & Biases, and the benchmark that you said that Farouk did, remind me of the name, sorry.[00:22:13] Dharma? Dharma, yeah, yeah. So Luigi showed me how Dharma shows up inside the dashboard, in the Weights & Biases dashboard, and so you can actually kind of see the trending run, and then you can see, per each kind of iteration or epoch, you can see the model improving, trending. So, on top of everything else,[00:22:29] Weights & Biases gives like hyperparameter tracking, which, like, you started with command line and that's really hard to remember. Also the Dharma dataset, like the mini MMLU, the mini Orca, many different things, it's pretty cool to visualize them as well. And I heard that he's working on a new version of Dharma, so Dharma 2, et cetera.[00:22:47] So hopefully we'll see that soon, but definitely it's hard, right? You start this training run, you said like 40, 50 hours. Sometimes you're SSHing into this machine, you start a process, you send it with God, and you just go about your day, collecting data sets, and then you have to return.[00:23:04] And the whole process of instrumentation of this is still a little bit squeaky, but definitely, tuning performance, or like grabbing performance in the middle of this, like with Dharma and some other tools, is very helpful to know that you're not wasting precious resources going somewhere you shouldn't go.[00:23:21] Yeah.[00:23:22] swyx: Yeah. Very cool. Maybe, before we go into like sort of more details on fine tuning stuff, I just wanted to round out the rest of the Axolotl universe. There's still Eric Hartford stuff. I don't know if you want to talk about Pygmalion, Disco, anything that you know about[00:23:35] Wing Lian: those things.[00:23:36] DiscoLM and Model Stacking[00:23:36] Wing Lian: Yeah, I think one of the definitely more interesting ones was like the Disco 120B, right? Yeah, I know nothing about it. Yeah. So, Alpin from PygmalionAI, right, so Pygmalion is sort of a, they have their own community, a lot of it is based around roleplay models, those sorts of things, and Alpin, like, put together, merged together Llama 2 70B; I don't remember how he stacked them together, whether he merged the layers in between.
There's a whole toolkit for that by Charles Goddard, where you can take a single model and stack it together, or merge multiple models.[00:24:18] That's like a whole other talk and a whole other tool set, but he was able to create this 120 billion parameter model out of a Llama 2 70B. And then I believe Disco is a fine tune of the sort of base 120B, which is, I believe, Goliath 120B. So, and what are the[00:24:37] swyx: headline results that people should know about[00:24:39] Wing Lian: Disco?[00:24:39] I think for the headline results, I haven't played with it personally, because it's a very large model and there's a lot of GPU, right? But, like, from what I've heard anecdotally, it performs really well. The responses are very good. Even the base model is a lot better than Llama 70B.[00:24:57] So, and I think generally everybody's like, we would all love to fine tune Llama 70B, but it's just so much memory, so much compute, right?[00:25:07] Datasets and Evals over Models[00:25:07] Wing Lian: I[00:25:07] Alex Volkov: want to touch on this point, because the interesting thing that comes up out of being in this ecosphere and being friends with open source folks, tracking week to week state of the art performance on different models:[00:25:19] first of all, a lot of the stuff that folks did a couple of weeks ago, and then something like Mistral comes out, and a lot of the stuff back then doesn't technically make sense anymore. Like the artifacts of that work, the actual artifacts, they no longer make sense. They're lower on the Hugging Face leaderboard, or lower on the LMSys leaderboard.[00:25:36] But some of the techniques that people use, definitely the datasets, the datasets keep traveling, right? So OpenHermes, for example, is the dataset Teknium cleaned up for only open-sourceable data, that previously was just Hermes. And it was previously used to train Llama. And then once Mistral came out, it was used to train Mistral.[00:25:54] And then it became significantly better on the 7B base Mistral. So the data sets keep traveling, keep getting better a little bit here and there. And so the techniques improve as well. It looks like both things are simultaneously true. The artifacts of a month and a half ago, the actual models themselves, it's great that Hugging Face has them, because not every company can keep up with next week's, oh, I'll install this model instead, this model instead.[00:26:19] But the techniques and the datasets keep improving as we go further, and I think that's really cool. However, the outcome of this is that for a long time, for many, many people, including us, that do this every week, we literally talk with people who release these models every week, it's really hard to know.[00:26:36] So, there's a few aspects of this. One, I think, like you said, the bigger models, the 70B models, you actually have to have somebody like Perplexity, for example, giving you access to the 70B really fast. Or you have to actually find some compute, and it's expensive, especially for the bigger models. For example, Falcon 180B came out, like the hugest open source model.[00:26:56] How do you evaluate this if you can't run it? Nobody liked it.
It's really, so first of all, nobody liked it, but secondly, only the people who were able to find enough compute to run inference on this, they only had, like, I can't run this on my laptop. And so that's why it's much easier with something like OpenHermes 7B; it's much easier, because you can run this on your MacBook.[00:27:14] It's much easier to evaluate. It's much easier to figure out the vibes, right? Everybody talks about the vibes as an evaluation check. If you're plugged in enough, if you follow the right people, if they say pretty much the same things all independently, then you run into a problem of whether they're repeating, and their stochastic parrots are repeating the same thing, or they actually evaluated themselves.[00:27:31] Yeah, you never know. But, you never know, but like, I think on a large enough scale on Twitter, you start getting the feel. And we all know that, like, OpenHermes is one of the top performing models, benchmarks, but also vibes. And I just wanted to highlight this vibe checks thing, because you can have the benchmarks, you can have the evaluations, they potentially have contamination in them, potentially they don't necessarily tell you the whole story, because some models are good on benchmarks, but then you talk to them, they're not super helpful.[00:28:00] And I think it's a combination of the benchmarks, the leaderboards, the chatbot, because LMSys, remember, their ranking is not only based on benchmarks, it's also people playing with their arena stuff. People, actual humans, like, get two answers. I think they completely ignore benchmarks. Yeah, and then they only do ELO.[00:28:18] Oh, they do ELO completely, right? So that, for example, is just people playing with both models and saying, hey, I prefer this one, I prefer that one. But also there's like some selection bias. The type of people who will go to LMSys to play with the models, they're a little bit specific in terms of who they are.[00:28:33] It's very interesting. There's so many models. People are doing this in this way, that way. Some people are doing this for academic rigor only, to test out new ideas. Some people are actually doing this, like the Intel fine tunes of Mistral. Intel wanted to come out and show that their hardware approach is possible, Mistral, etc.[00:28:51] And it's really hard to know, like, what to pick, what to use. And especially on the bigger models, like you said, like the Llama 70B, the Falcon 180B, it's really because, like, who has the compute to validate those? So I would mention that, like, use with caution.
Like, go and research and see if the biggest model that just released was actually worth the tokens and the money you spend on it,[00:29:12] to try and, if you're a business, to integrate it.[00:29:15] Distilling from GPT4[00:29:15] swyx: Since you said use with caution, I'll bring in one issue that has always been in the back of my mind whenever I look at the entire universe of open source AI models, which is that 95 percent of the data is derived from GPT 4, correct?[00:29:30] Which technically you can't use for commercial licenses,[00:29:34] Wing Lian: right?[00:29:35] swyx: What is the community's stance on this kind of stuff?[00:29:40] Wing Lian: I think from the community stance, like I feel like a lot of us are just experimenting, so for us, it's like, we're not going and building a product that we're trying to sell, right?[00:29:49] We're just building a product because we think it's interesting and we want to use it in our day to day lives, whether or not we try and integrate it. Personal use, yeah. Yeah, personal use, so like, as long as we're not selling it, yeah, it's fine. But[00:30:01] swyx: like, I as a company cannot just take OpenHermes and start serving[00:30:05] Alex Volkov: it and make money on it.[00:30:06] OpenHermes you can, because OpenHermes, I think, is a cleanup Teknium did after the regular Hermes. Please folks, check your licenses before you listen to podcasts. And, hey, I will tell you though, you could say the same thing about OpenAI. You could say the same thing, kind of makes sense, where OpenAI or StabilityAI trains their diffusion model on a bunch of pictures on the internet, and then the court kind of doesn't strike down Sarah Silverman, I think, or somebody else, who came and said, hey, this has my work in it, because of the way it processes it, and the model eventually builds this knowledge into the model, and then it doesn't actually reproduce one to one what happened in the dataset.[00:30:45] You could claim the same thing for open source. Like, we're using, and by we, I mean the open source community that I happily report on, uses GPT 4 to rank, for example, which is the better answer. That's how you build one type of data set, right? Or DPO or something like this, you basically generate a data set of, like, a question and four answers, for example, and then you go to GPT 4 and say, hey, smartest model in the world right now, up to Gemini Ultra, that we should mention as well,[00:31:11] which one of those choices is better? But the choices themselves are not necessarily written with GPT 4. Some of them may be, so there's like fully synthetic datasets. But there's also datasets that are just ranked with GPT 4, but they're actually generated with a sillier model, or like the less important model.[00:31:25] The lines are very blurry as to what type of stuff is possible or not possible. And again, when you use this model that's up on Hugging Face, the license says you can use this. OpenAI is not going to come after you, the user. If anything, OpenAI will try to say, hey, let's prevent this type of thing from happening, but I honestly don't think that they could even know, not that it makes it okay. It's just like, they also kind of do this with the Internet Archive, and also, I think that some of it is fair use.[00:31:55] You use models to help you augment tasks, which is what GPT 4 lets you do.[00:32:00] swyx: Yeah, the worst thing that OpenAI can do is just kick you off OpenAI.
That's because it's only enforced in the terms of service.[00:32:05] Alex Volkov: Sure, but just to clarify who they're going to kick out: they could kick out, like, Nous, for example, if Nous were abusing their service. A user of the open source, fully Apache 2 open source, for example, they won't get kicked out if they use both, just because they use both.[00:32:22] I don't believe so. I don't think OpenAI has a claim for that.[00:32:25] swyx: Well, we're not lawyers, but I just want to mention it for people to know it's an issue.[00:32:30] Wing Lian: And one of the things, like, I talked to someone recently, and I think that they also are interested in it, but also to the point of, right, if I use a model trained on GPT 4 data, but I use that model to then regenerate new data,[00:32:46] is that model, is that data okay? So you start going down this whole rabbit hole. So yeah. All right.[00:32:53] swyx: Fantastic. Cool. Well, I think that roughly highlights most of the open source universe. You also have your own models. Do you want to shout out any one of them? Yeah.[00:33:01] Wing Lian: I mean, I think, early on, Manticore got a lot of love.[00:33:04] I think it was mostly popular in, like, the roleplay communities. It tended to be pretty truthful. It tended to have relatively good answers, depending on who you ask, right? But, I think for me, it was just, releasing models was a way to try and continue to build out the product, figure out what I needed to put into the product, how do I make it faster, and, if you've got to go and debug your product, you may as well have it do something useful.[00:33:29] Awesome. So, yeah.[00:33:31] Finetuning - LoRA, QLoRA, ReLoRA, GPTQ[00:33:31] swyx: Okay, and then maybe we'll talk about just fine tuning techniques. So this is going to be a little bit more technical than just talking about model names and datasets. So we started off talking about LoRA, QLoRA. I just learned from your readme there's ReLoRA, which I've never heard about.[00:33:45] Could you maybe talk about, like, just parameter efficient fine tuning, that whole, that[00:33:50] Wing Lian: whole journey, like, what people should know. Yeah, so with parameter efficient fine tuning, I think the popular ones, again, being, let's, we'll start with LoRA, right? So, usually what you do is you freeze all the layers of your base model, and then you, at the same time, you sort of introduce
another set of layers over it, and then you train those. And it is done in a way that is mathematically possible, particularly with LoRAs, that when you train the model, you run your inputs through the base model, whose weights are frozen, but you also run them through the additional weights, and at the end you combine the outputs. And when you're done training, you're left with this other set of weights, right, that are completely independent. And then from that, what you can do is, some person smarter than I figured out, well, they've done it in such a way that now I can merge these weights back into the original model without changing the architecture of the model, right?[00:35:03] So, that tends to be, like, the go to. And you're training much fewer parameters, so that when you do that, yes, you still need to have all of the original weights, but you have a smaller gradient, you have a smaller optimizer state, and you're just training fewer weights, so you can tend to train those models on, like, much smaller GPUs.[00:35:27] swyx: Yeah. And roughly, what I've seen out there is roughly like 1 percent the number of parameters that you're training. Yeah, that sounds about right. Which is that much cheaper. So Axolotl supports full fine tune, LoRA, QLoRA,[00:35:40] Wing Lian: Q. Yes. So, QLoRA is very similar to LoRA. The paper was, if I remember correctly, rather, traditionally, most people who did LoRAs were putting the model weights in 8 bit, and then doing parameter efficient fine tuning over the LoRA weights, and then with QLoRA, they were quantizing the weights down to 4 bit, right, and then I believe they were also training on all of the linear layers in the model.[00:36:15] And then with ReLoRA, that was an interesting paper, and then, I think, it got implemented. Some people in the community tried it out, and it showed that it didn't really have the impact that the paper indicated it would. And from what I was told recently, they re-released something for ReLoRA, like, a few weeks ago, and that it's possibly better.[00:36:44] I personally haven't had the time. What was the[00:36:46] swyx: main difference,[00:36:47] Wing Lian: apart from quantization? I don't know. Okay. What was the main difference, sorry?[00:36:49] swyx: Apart from quantization, right? Like,[00:36:50] Wing Lian: QLoRA's thing was, like, we'll just drop off some bits. With ReLoRA, what they did was, you would define some number of steps that you would train your LoRA with, or your QLoRA.[00:37:01] Like, ReQLoRA, if you really wanted to. You would train your LoRA for some number of steps, and then you would merge those weights into your base model, and then you would start over. So then, by starting over, the optimizer has to sort of re-optimize again and find what's the best direction to move in, and then do it all again, and then merge it in, do it all again. And theoretically, according to the paper, doing ReLoRA, you can do parameter efficient fine tuning, but still have sort of, like, the performance gains of doing a full fine tuning, so.
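To make the LoRA and QLoRA knobs above concrete: here is a minimal sketch using Hugging Face PEFT and bitsandbytes, the libraries Axolotl wraps for this. The model name, rank, and target modules are illustrative defaults, not a recommendation or Axolotl's configuration.

```python
# Minimal LoRA/QLoRA sketch with PEFT + bitsandbytes; values are illustrative.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# QLoRA: load the frozen base model quantized to 4-bit NF4.
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", quantization_config=bnb
)
model = prepare_model_for_kbit_training(model)

# LoRA: train small adapter matrices over the frozen linear layers.
lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically around 1% of the base parameters

# After training, the adapters can be merged back into the base weights:
# merged = model.merge_and_unload()
```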
[00:37:38] swyx: Yeah, and GPTQ?[00:37:39] Wing Lian: And GPTQ, so, I think with GPTQ, it's more similar to QLoRA, where it's mostly a quantization of the weights down to like 4 bit, where GPTQ is a specific methodology or implementation of quantization. So. Got it.[00:37:57] Alex Volkov: Wing, for folks who use Axolotl, your users, some people who maybe want to try it out,[00:38:03] do they need to know the differences? Do they need to know the implementation details of QLoRA versus ReLoRA? Or is it okay for them to just know that Axolotl is the place that already integrated them? And if that's true, if that's all they need to know, how do they choose which method to use? Yeah,[00:38:22] Wing Lian: so I think most people aren't going to be using ReLoRA.[00:38:25] I think most people are going to be using either LoRA or QLoRA. And I think they should have an understanding of why they might want to use one over the other. Most people will say that with QLoRA, the quality of the final model is not quite as good as if you were to do a LoRA or a full fine tune, right?[00:38:44] Just because you've quantized these down, so your accuracy is probably a little off, and so by the time you've done the QLoRA, you're not moving the weights how you would on a full fine tune with the full parameter weights.[00:38:56] Interesting.[00:38:57] swyx: Okay, cool. For people who are more interested, obviously, read the papers. I just wanted to give people a high level overview of what these things are. And you've done people a service by making it easy for them to try it out. I'm going to also ask a question which I know to be wrong, but I'm curious because I get asked this all the time.[00:39:15] What is the difference between all these kinds of fine tunes[00:39:17] Wing Lian: and RLHF? Okay, between all of these sorts of fine tunes and RLHF. So all of these sorts of fine tunes are, ideally, taking knowledge that the base model already knows about, and presenting it in a way where you're having the model use what it already knows to answer in a particular way, whether you're extracting general knowledge, a particular task, right?[00:39:44] Instruct tune, chat, those sorts of things. And then generally with RLHF, so what is it? Reinforcement Learning from Human Feedback. So if we start with the human feedback part, what you're doing is you generally have a given prompt, and then, maybe you have one, maybe you have two, I think if you look at Starling, you have up to, what, seven different possible responses, and you're sort of ranking those responses on some sort of metric, right? Whether the metric is how much I might like that answer, or, I think with Starling, it's how helpful was the answer, how accurate was the answer, how toxic was the answer, those sorts of things, on some sort of scale, right? And then using that to go back and sort of take a model and nudge it in the direction of that feedback, to be able to answer questions based on those preferences.
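On the data side of what Wing just described, the ranked responses typically get flattened into the chosen/rejected pairs that preference-tuning methods like DPO consume. A small illustrative sketch (the row format is generic, not Starling's actual schema):

```python
# Illustrative sketch: turn ranked responses (best first) into the
# chosen/rejected pairs that preference-tuning methods like DPO consume.
def to_preference_pairs(prompt: str, ranked_responses: list[str]) -> list[dict]:
    pairs = []
    for i, chosen in enumerate(ranked_responses):
        for rejected in ranked_responses[i + 1:]:
            pairs.append({"prompt": prompt, "chosen": chosen, "rejected": rejected})
    return pairs

pairs = to_preference_pairs(
    "Explain RLHF in one sentence.",
    ["Clear and accurate answer", "Vague answer", "Toxic answer"],
)
print(len(pairs))  # 3 pairwise preferences from 3 ranked responses
```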
[00:40:42] swyx: Yeah, so you can apply, and is it commutative? Can you apply fine tuning after and onto an RLHF model? Or should the RLHF come in afterwards,[00:40:54] Wing Lian: after the fine tune? Um, yeah, I don't know that there's been enough research one way or another, like, I don't know.[00:41:02] That's a question that's been asked on Discord. Yeah, like, I definitely would say I don't know the answer. Go and try it and report back to me and let me know so I can answer for the next guy.[00:41:10] swyx: It's shocking how much is still unknown about all these things. Well, I mean, that's what research is for, right?[00:41:16] Wing Lian: So actually, I think I saw on the top of a leaderboard, it was a Mistral base model, and they didn't actually fine tune it. They just did an RLHF fine tune on it using, I don't recall which dataset, but it benchmarked really well.[00:41:37] But yeah, you'd have to go and look at it. But, so it is interesting, like going back to that, it's like, traditionally, most people will fine tune the model and then do like a DPO, PPO, some sort of reinforcement learning over that, but that particular model, it seemed like they skipped the supervised fine tuning.[00:41:55] Axolotl vs HF Transformers[00:41:55] swyx: Cool. One thing I did also want to comment about is the overall competitive landscape, I don't know. Hugging Face Transformers, I think, has a PEFT module.[00:42:05] Wing Lian: Yeah, yeah, PEFT, the Parameter Efficient Fine Tuning, yep. Is that a competitor to you? No, no, so we actually use it. We're just a wrapper over sort of the HuggingFace stuff.[00:42:15] So that is their own sort of module where they have taken the responsibility of, like, where you're doing these parameter efficient fine tuning methods, and it is in that particular package, where Transformers is mostly responsible for sort of the modeling code and the trainer, right?[00:42:35] And then there's an integration between the two, and there's a variety of other fine tuning packages, I think like TRL, TRLX. TRLX is the Carper one, the Stability AI one, and TRL is a Hugging Face trainer. Even that one's just another wrapper over the Transformers library and the PEFT library, right?[00:43:00] But what we do is, we have taken sort of those, yes, we also use that, but we also have more validation, right? So, there are some of us who have done enough fine tunes where, like, oh, this and this just don't go together, right? But most people don't know that, so like Example?[00:43:19] Like, people want to One and one doesn't go together. I don't have an example offhand, but if you turn this knob and this knob, right? You would think, all right, maybe this will work, but you don't know until you try. And then by the time you find out it doesn't work, it's like maybe five minutes later, it's failed.[00:43:34] It's failed in the middle of training, or it's failed during the evaluation step. And you're like, ah. So we've added a lot more validation in it. So that when you've created your configuration, you run it through, and now the validation code says this is probably not right, or probably not what you want.[00:43:52] So are you like, do you[00:43:53] swyx: do some linting of your YAML file?[00:43:56] Wing Lian: I guess you could call it linting, it's sort of like Is there a set of rules out[00:44:00] swyx: there somewhere? Yeah, there's a set of rules in there.
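The set of rules Wing mentions amounts to config linting: load the YAML, then check for knob combinations known (from painful experience) not to work together. A sketch of the idea; the specific rules below are invented examples in the shape of Axolotl-style YAML keys, not its actual rule set:

```python
# Sketch of config "linting": load the YAML, then check for knob combinations
# known not to work together. The rules below are invented examples.
import yaml

def validate(cfg: dict) -> list[str]:
    warnings = []
    if cfg.get("load_in_8bit") and cfg.get("load_in_4bit"):
        warnings.append("load_in_8bit and load_in_4bit are mutually exclusive")
    if cfg.get("adapter") == "qlora" and not cfg.get("load_in_4bit"):
        warnings.append("adapter: qlora usually implies load_in_4bit: true")
    return warnings

with open("config.yml") as f:
    cfg = yaml.safe_load(f)
for w in validate(cfg):
    print("warning:", w)
```

The payoff is failing in seconds at config time instead of five minutes into a training run.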
That's amazing. You should write documentation like, this rule is because this user at this time ran into this bug, and that's what we invested in.[00:44:10] It's like a good collection[00:44:11] Wing Lian: of knowledge. Yeah, it is, and I guess if you really wanted to figure it out, you could git blame everything. But, yeah, I think that's always a useful thing, because people want to experiment, but people will get frustrated when you're experimenting and it breaks and you don't know why, or you know why and you've just gone down the rabbit hole, right?[00:44:37] So I think that's one of the big features that I find important, because it prevents you from doing things you probably shouldn't have, and sometimes we will let you do those things, but we'll try and warn you that you've done that.[00:44:50] I[00:44:51] Alex Volkov: have a follow up question on this, actually, because yesterday we hung out at this open source event, and I spent time by you a couple times, like when people told you, oh, Axolotl, I use Axolotl, it's super cool, and the first thing you asked, immediately, was, what can we improve?[00:45:04] And yes, from multiple folks. And I think we talked about this a little bit, where it's a developer tool, it's like a machine learning slash developer tool. Your purpose in this is to help people and keep, as much as possible, like, hey, here's the best set of things that you can use right now. The bare libraries, or the bare trainer, for example, it's a bare trainer.[00:45:28] And also, maybe we should talk about how fast you're implementing these things. So you mentioned the first implementation took a week or so. Now there's a core maintainer group, right? Features are landing, like QLoRA, for example. NEFTune, I don't know if that's one example of something that people potentially said was going to be cool, and then eventually, like, one of those things that didn't really shake out, like, people quickly tested this out.[00:45:48] So, there's a ton of Wait, NEFTune is cancelled? I don't know if it's fully cancelled, but based on vibes, I heard that it's not that great. So like, but the whole point that I'm trying to make with NEFTune as well is that existing in the community of, like, Axolotl, or, I don't know, even following the GitHub options or following the Discord, it's a fairly good way to learn these kind of gut feelings that you just said, right?[00:46:14] Like where this, maybe this knob, that knob doesn't work. Some of these are not written down. Some of these are like tribal knowledge that passes from place to place. Axolotl is like a great collection of many of them. And so, do you get that back also from the community of folks who just use it? Like, how do you know who uses this?[00:46:30] I think that's still an issue, like, knowing if they trained with Axolotl, or should they add this to things?
Talk about how do you get feedback, and how else should you get feedback?[00:46:38] Wing Lian: Yeah, I mean, most of the feedback comes from the Discord, so people come in and they don't get a training run going, they run into, like, obscure errors, or errors that, a lot of things that maybe as a product we could catch, but there's a lot of things that at some point we need to go and do, and it's just on the list somewhere.[00:46:58] Right, that's why when people come up, I'm like, what were your pain points? Because, as a developer tool, if you're not happy with it, or you come in and the first time takes you 30 minutes and you're still not happy, you leave the tool, and you might move on, maybe to a better tool, maybe to one with less frustration, but it may not be as good, right?[00:47:17] So I'm trying to figure out, all right, how can I reduce all this frustration? Because for me, I use it every day for the most part, right? And so I am blind to that, right? Mm-hmm. Mm-hmm. I just know, I go do this, this, and this. It pretty much mostly works, right? But, so I don't have sort of that learning curve that other people are seeing, and I don't understand their pain points.[00:47:40] Yeah,[00:47:40] Alex Volkov: you don't have the ability to onboard yourself as a completely new user, new to the whole paradigm, to get into the doors of, like, oh no, I don't even know how to ask about this problem or error.[00:47:53] swyx: Cool. The last few things I wanted to cover was also just the more advanced stuff that you covered yesterday.[00:48:00] 20x efficiency with StackLlama and Multipack[00:48:00] swyx: So I'll just caution this as like, yeah, this is more advanced. But you mentioned StackLlama and Multipack. What are they[00:48:06] Wing Lian: and what should people know? Yeah, so StackLlama was, that paper came out, so StackLlama I think was two separate concepts that they announced, so the first one was, they being Hugging Face,[00:48:20] yeah, sorry, yes, they being Hugging Face, so the first one being sort of this idea of packing, like packing sequences together. So, if we think about training data, right, let's say, to keep the math easy, we'll use the terminology 'words': your training data is 500 words long, and your context length, how much data your model can accept, or that you want to feed into your model, we won't use tokens again, is, let's say, 4,000 words, right? So if you're training at 4K context and you're only using 500 of it, you're sitting with the other 3,500 words that you're not using, right? And typically that's filled with these PAD tokens. I think I made the analogy last night that it's like having sort of a glass here: you fill it up with a shot of liquor, and that's your training data, and then you just fill it up with more water, and those are your PAD tokens, and it just doesn't do much, right?[00:49:27] It's still the same thing, but you still have to go through all of that to go through all your training data.
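The packing Wing describes next can be sketched as a greedy loop over tokenized rows. This toy version ignores the cross-sequence attention problem, which is exactly what Multipack then fixes:

```python
# Toy sketch of naive sequence packing: append tokenized rows into 4k-token
# blocks instead of padding each short row out to the full context length.
def pack(rows: list[list[int]], ctx: int = 4096, pad_id: int = 0) -> list[list[int]]:
    blocks, cur = [], []
    for row in rows:
        if len(cur) + len(row) > ctx:
            blocks.append(cur + [pad_id] * (ctx - len(cur)))
            cur = []
        cur = cur + row
    if cur:
        blocks.append(cur + [pad_id] * (ctx - len(cur)))
    return blocks

rows = [[1] * 500 for _ in range(8)]  # eight 500-token examples
print(len(pack(rows)))                # 1 packed block instead of 8 padded ones
```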
And then, so what StackLlama showed was you could just sort of take your training data, append the next row of training data until you've filled that entire 4K context, so in this example, right, with 500 words to 4K, that's 8 rows of training data.[00:49:48] But the problem with that is that a lot of these transformer models are very much relying on attention, right? So, if you now have this sequence of words, the model has seen all of these other words before, right? And then it sees another set of words, another set of words, but it's learning everything in context of all the words that it's seen before.[00:50:13] We haven't corrected the attention for that. And just real quickly, since I said that that paper was two concepts, the other one was, I believe, a reinforcement learning thing, but outside the scope of this. So going from that, I implemented that early on because I was like, oh wow, this is really great,[00:50:29] and yes, because it saves you a bunch of time, but the trade off is a little bit of accuracy, ultimately, but it still did pretty well. I think when I did Manticore, I think it used sort of that concept from StackLlama of just appending these sequences together, right? And then sort of the next evolution of that is Multipack, right?[00:50:51] So, there was a separate paper on that, I believe it got referenced in the Orca paper, where you could properly mask those out using, I think it was like a lower block triangular attention mask. So, there's that. I did try implementing that, manually recreating that mask, but then the one from OpenChat, so he was helping with OpenOrca as well, and he had done an implementation of Multipack where he used FlashAttention. So FlashAttention was released by Tri Dao, and it was this huge performance gain.[00:51:35] Everybody uses it now, even the Transformers library now; people are taking all of these models and sort of making them compatible with FlashAttention. But in FlashAttention, there is one particular implementation that lets you say, well, I'm sending you all of these sequences like you would in StackLlama, but let me send you another set of information about, this is where this set of sequences is, this is where the second set of sequences is.[00:52:06] So, if each was 500 words long and you stacked them all together, you would just send it a row of information that was like, 0, 500, 1000, 1500, etc., out to 4000. And it would know, alright, I need to break this up, and then run the forward pass with it. And it was much more performant.[00:52:29] And I think you end up seeing like 10x, 20x improvements over sort of, I mean, I think FlashAttention was like a 2x improvement, and then adding that with the Multipack, you start to see, depending on how much data you have, up to like a 20x improvement sometimes. 20x. 20x. Wow. Yeah.
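That row of offsets ("0, 500, 1000, ... out to 4000") is the cumulative-sequence-lengths tensor that FlashAttention's variable-length interface (flash_attn_varlen_func) accepts, so attention never crosses a packed-sequence boundary. A sketch of building it from the lengths of the sequences in one packed block:

```python
# Sketch: build the cu_seqlens offsets for a packed block from the lengths of
# the sequences inside it, as consumed by FlashAttention's varlen kernels.
import torch

def cu_seqlens(lengths: list[int]) -> torch.Tensor:
    return torch.tensor([0] + lengths, dtype=torch.int32).cumsum(0, dtype=torch.int32)

print(cu_seqlens([500] * 8))  # tensor([0, 500, 1000, ..., 4000], dtype=torch.int32)
```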
[00:52:48] Wing Lian: Yeah. And I only know the 20x because, before last night, I re-ran Alpaca. I looked up the Alpaca paper because I just needed a frame of reference where somebody had done it: I think they used eight A100s for three hours, and they said it cost them $100. I don't know how much eight A100s cost right now.

[00:53:14] But I ended up rerunning it.

[00:53:18] Alex Volkov: Usually a dollar an hour, right? The cheapest is like a dollar an hour for one.

[00:53:20] Wing Lian: Yeah, so eight of them for three hours is still only like $24, $25. But maybe if you're going on Azure, maybe it's $100 there. It used to be more expensive, like, a year ago.

[00:53:31] So I re-ran it with all of the optimizations turned on, just to see what it would be. Usually Multipack is the biggest optimization, so Multipack with FlashAttention. I spun it up on 8 L40s, and I didn't let it run all the way through; I just grabbed the estimated completion time, and it was about 30 minutes. So it would have cost like $4 or $5 to reproduce the entire Alpaca paper, right?

[00:54:00] Which is crazy.

[00:54:02] Alex Volkov: It's crazy. 20x, yeah. I want to ask about that: you said you turned on all the optimizations. Is that the YAML file with Axolotl, where you just go and check off, I want this, I want that?

[00:54:10] Wing Lian: Yeah, there's one particular YAML file in there, under examples, llama2, fft, optimize. I think someone had created one where they just put in all of the optimizations and turned them on (a sketch of that kind of config follows this excerpt). And it actually does run, which is sort of surprising sometimes, because you optimize this and optimize that, and sometimes they just don't work together. But yeah, just turn the knobs on. Fine-tuning should really just be that easy, right? I just want to flip the knob and move on with my life, not figure out how to implement it.

[00:54:47] Tri Dao and Mamba

[00:54:47] Alex Volkov: Specifically, the guy behind FlashAttention came up with something new. You want to talk about this a little bit? You want to briefly cover Mamba?

[00:54:53] Yeah, let's talk about Mamba. Let's talk about Mamba. So, what is Mamba?

[00:54:57] Wing Lian: Oh, gosh. I
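As promised above, here is a minimal sketch of that kind of "all the knobs on" config. The key names (sequence_len, sample_packing, flash_attention, gradient_checkpointing, and so on) are documented Axolotl options, but the base model and values below are illustrative assumptions, not a verified copy of the examples/llama2 fft optimize file Wing mentions:

```yaml
# Illustrative sketch only; not the actual examples/llama2 fft optimize file.
base_model: NousResearch/Llama-2-7b-hf  # assumption: any Llama-2 base model
sequence_len: 4096
sample_packing: true          # Multipack: pack short samples into each 4K row
pad_to_sequence_len: true
flash_attention: true         # FlashAttention kernels handle packed boundaries
gradient_checkpointing: true
bf16: true
micro_batch_size: 1
num_epochs: 3
```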
Interview by Haze / mike_tall
We recently linked with Memphis artist Mac Critter for an exclusive “Off The Porch” interview! During our sit-down he talked about working on his next album, being known all over Memphis, the culture in North Memphis, crediting consistency with helping him get out of the hood, jumping off the porch when he was 12, learning how to move while in the streets, starting to rap when he was 14, signing his first deal when he was 18, his work ethic while recording, revealing he has 400 unreleased songs, “Torture” being one of the first music videos to blow up for him, dropping a project every month last year, knowing Double 0 since they were very young, how his deal with Gucci came about, his reaction to receiving his 1017 chain, recording “Dog” with Wop, his life changing after the deal, how he deals with the hate, his new single “High Risk”, his new music videos for “Lay Low” & “Nightmare”, what to expect from his next album, his growth as an artist, staying hungry and motivated, not getting uncomfortable, his next video, having 4 kids, fatherhood, advice for the youth, and much more!
#SabiNation, today we talk about HAPPINESS. What makes people happy, and what makes some people never happy? How can we arrange our lives to live happily?... #HowUnaSeeAmPodcast #FemiAndMaazi
To leave us a message, you can reach us on our social media platforms:
Facebook: How Una See Am Podcast
Instagram: @officialhusapodcast
Twitter: @HusaPodcast
YouTube: How Una See Am Podcast
You can also call our number +1 (240) 459-8365 and leave a comment by voicemail, or use this link https://wa.me/qr/ZYXR5W3OAUFWL1 to add us on WhatsApp and send a voice note. We want to hear from you.
Music Credits:
UNITY by HUSAPodcast, YellowFela Production
Afrobeat Mix
Photo by Lay Low from Pexels
Lay low
Club Lay Low provides merchandise, apparel, and California-compliant cannabis flower, vaporizers, concentrates, edibles, CBD, and hookah. They also just released their new Fun Stik this year, a cannabis-infused straw that dissolves into any beverage.
An exculpatory witness has emerged in the case against Daniel Penny, the Marine subway hero who is facing manslaughter charges for rescuing passengers from a crazed and violent Jordan Neely. The witness, who apparently filled out a police report, claimed Penny did all he could to avoid Neely until it became impossible, because Neely began threatening passengers. She quotes Neely as saying “I would kill a motherf-er. I don't care. I'll take a bullet. I'll go to jail.” The witness also claims that several passengers took video of the incidents and threats leading up to Penny subduing Neely, and she hopes they come forward. She was shocked to find out Penny was being charged by Alvin Bragg, who had to have known that Neely had been placed on the top 50 list of homeless people in NYC.
WATCH AND SUBSCRIBE TO OUR YOUTUBE CHANNEL https://www.youtube.com/@carljacksonshowandblog
More: www.TheCarljacksonshow.com
Facebook: https://www.facebook.com/thecarljacksonshow
Twitter: https://twitter.com/carljacksonshow
Parler: https://parler.com/carljacksonshow
Beach Vacation - "Lay Low", a 2023 single on Z Tapes. Take some time off with the dreamy vibes of Beach Vacation. The latest single from this Seattle-based pop project of songwriter Tabor Rupp will call to mind bands on the Captured Tracks roster like Beach Fossils and Wild Nothing. Rupp told the music blog kid with a vinyl that today's Song of the Day is about “wanting to leave behind everything – specifically the things and people that are stressors and forget about them forever.” Sounds like a vacation to me. Read the full story at KEXP.org. Support the show: https://www.kexp.org/donate
This episode of No Tracers is a captivating one that takes you on an exciting journey through the city of London from the unique perspective of Toby, AKA TW Visions. Join me as I interview TW Visions, a talented rooftopper based in London, and hear about his adventures scaling some of the city's tallest buildings. From the breathtaking views to the dangers and challenges of the sport, TW Visions shares his experiences and insights with us. Get ready to explore London's skyline, and beyond, like never before in "Live High, Lay Low with TW Visions."
Give Toby a follow: https://www.instagram.com/tw.visions/?hl=en
Follow me: http://instagram.com/no.tracers
TikTok: https://www.tiktok.com/@notracers?language=en
Personal IG: http://instagram.com/kenagonio
Twitter: http://twitter.com/KEnagonio
I'm now a Death Peddler for Liquid Death Water, which means you get 10% off your order: http://liquiddeath.com/discount/JUSTTHELETTERK
Read my urbex blog: http://notracers.com
Pick up my book: http://notracers.com/shop
Send in a voice message: https://podcasters.spotify.com/pod/show/notracers/message
Support this podcast: https://podcasters.spotify.com/pod/show/notracers/support
Pharmacy Radio 079, February 2023
Welcome to episode 79 of Pharmacy Radio. I have got a fantastic episode for you, featuring a one-hour guest mix by DJ Jordan from Berlin. He has put together an amazing mix of melodic and acid techno that will have you on your feet and going crazy. In the first hour, I will be featuring the best in progressive house, techno and psy trance. Let's get right into the mix with a brilliant remix by Argy of Tiesto's latest track, titled Lay Low, out now on Musical Freedom.
First Hour: Christopher Lawrence
Tiesto - Lay Low (Argy Remix) - Musical Freedom
Deciduous - Planet 9 - Astral Records
Q.U.A.K.E, Martin Magal - Desire for Life - Astral Records
Andrewboy - Last Samurai - Siona Records
Julian Jordan - The Bass (Pretty Pink Extended Remix) - STMPD RCRDS
Track of the Month: Cosmic Boys - Miracle - Legend
Gabriel Moraes - Ground Circuit - IbogaTech
Ritmo - Distorted - Iboga Records
Liquid Soul & Emok & Martin Vice - Elsewhere
E-Clip - One Consciousness - Iboga Records
Arhetip - Eunoia - TechSafari Records
Beyond - Science of Time - Digital Om
Featured Artist Guest Mix: DJ Jordan
DJ Jordan - Spirit (Marie Vaunt Remix) - soon on Ithica
Nico Rivera - We shall never surrender - IAMT
Red Galexis - Magnitude (Clap Codex Remix) - Alaula
DJ Jordan - Tranceformer - soon on Sonaxx
DJ Jordan - Starlink - soon on Black Snake
Dankers - Nextasy - Crypt
DJ Jordan - Flying High - soon on Animarum Black
DJ Jordan - Come with me - soon on Autektone
MyDarkSide - Sandstorm - Reload
DJ Jordan - New Horizon - soon on HQN
Ibai Alonso - Liberatus - Mask
Akki - Inside my Head - 1605
Liv.e - "Wild Animals" from the 2023 album Girl In The Half Pearl on In Real Life Music. Psychedelic soulstress Liv.e returns with her sophomore full-length Girl in the Half Pearl, out February 10 via In Real Life. The Dallas-born, Los Angeles-based singer/producer found inspiration while experimenting with live performance at her residency last year at London's Laylow. Today's Song of the Day was co-written and produced with John Carroll Kirby and Solomonphonic, and is accompanied by a video Liv.e directed herself. “I really love the process of coming up with a vision and doing my best to ensure that it will come out just as it was in my imagination,” Liv.e shared in a press release. “I tend to use almost all my practices as another way to strengthen my trust and belief in myself. The concept is just based on the release of letting go old ‘people pleasing' habits that I tended to act on in the past a lot. A depiction of gaining the strength & courage to choose myself every time.” Read the full story at KEXP.org. Support the show: https://www.kexp.org/donate
ON-AIR! This is #PRR543 by Nicky Romero. We're kicking off the year with many brand new bangers by the likes of Armin van Buuren, Dannic, D.O.D, Futuristic Polar Bears and many more. Timmo Hendriks & Lindequist return to Protocol Recordings with their tune “On My Mind”!
Tracklist:
1. Tiësto - Lay Low
2. Nicky Romero & Stadiumx - Harmony (Stadiumx Remix)
3. Low Blow - Over The Sun (Club Mix)
4. Control Room & Prayer Handz - MOVIN' (Club Mix)
5. Protocol Spotlight: Timmo Hendriks & Lindequist - On My Mind
6. Daniel Portman - Night & Day
7. Jack Wins ft. Franky - Have It All (George Z Remix)
8. Throwback Track: Nicky Romero & Vicetone vs. Swedish House Mafia vs. Bastille - Let Me Feel vs. For You vs. Pompeii (Whaler Mashup)
9. D.O.D - Set Me Free
10. Blackcode feat. David Allen - Falling
11. Armin van Buuren & Matoma feat. Teddy Swims - Easy To Love
12. Kosling & TWICE ft. Jordan Grace - All These Years
13. Jengi - Bel Mercy (Dannic vs Flexx & Kuli Bootleg)
14. Din & Vic - Only you
15. Futuristic Polar Bears - Where The Love Is
16. Edson Faiolli - So
17. Blasé - Lose My Breath
On this edition of The Homestretch on ESPN Kansas City, Sterling Holmes is joined by Joshua Brisco to discuss the Kansas City Chiefs... and their superfans. Plus, This or That!