The boys at Castle Comms lean into drama and talk some deserved shit about his holiness DrLupo, his band of 40-year-old dbag defenders, the H3 situation, and the death of nuisance streaming.
Hello, everyone. This is 火曜鶏, running at a pace slightly slower than even a once-a-month release. This time I believe we mainly talked about H3, the event Hideki and the rest of Chicken Heart are putting on in June, and about how we'll have a booth at the Spring Tanuki Festival being held in Hamamatsucho on 5/11. My memory is hazy. Anyway, the point is: see you in Hamamatsucho. I think we also talked about things like how we write "I'll copy-paste a summary once Masashi puts one together later" and then tend to just leave it at that. We welcome comments, questions, manga recommendations, and ideas for our opening greetings via the message form, or on Twitter and Instagram with the hashtag #火曜鶏.
Welcome back to Heroes Three podcast. This week we discuss Zhang Yimou's wuxia follow-up to Hero, 2004's House of Flying Daggers starring Andy Lau, Takeshi Kaneshiro, and Zhang Ziyi! Full cast and credits at HKMDB. Check out some H3 art and merch! - https://www.teepublic.com/user/kf_carlito Find us online - https://linktr.ee/Heroes3Podcast Email us! - heroes3podcast@gmail.com Full blogpost with gifs here! Timestamps (0:00) Intro (0:52) Why House of Flying Daggers (10:58) The cast and crew (18:15) Yimou's approach changing over time (27:08) Back of the VHS (27:44) Movie Discussion (1:11:31) Final thoughts (1:14:21) Plugs and Training for next week
Nini problem solving at Discord and Roblox, Todd's gift of 125GB, the V in LGBT, Alyssa gets a job, Keem's phone call, Phil's Nintendo Switch, H3's skull collection, Ian lowers his expectations, Kanye and Nick open up, Rekieta learns absolutely nothing, I didn't say anything wrong, and the choice between Fat and Jolly or Bald and ...
Discussing the 2025 Ryan Coogler film Sinners, the Content Cop drop on H3 by iDubbbz, and the future of Warner Brothers and Disney. Hosted on Acast. See acast.com/privacy for more information.
Suzanne feels she owes an explanation, because she no longer agrees with a few statements made in the first episodes with the H3 network. For instance, she no longer stands behind flattening the cycle of young girls with ADHD or autism by having them take the contraceptive pill continuously to prevent the hormone fluctuations that worsen their symptoms. And in any case she does not consider the contraceptive pill a remedy for hormonal mood complaints in perimenopause, because she believes we badly need real progesterone, which you barely have anymore while on the pill. Meanwhile we are being stuffed with synthetic oral junk, antidepressants, and ADHD medication. She is not saying it doesn't help, but she does think it can be done differently. Better.
In her search for answers she listened to the ADHD Women's Wellbeing Podcast by Kate Mouryseff, which featured Adèle Wimsett, an English hormone specialist who focuses on perimenopause and the treatment of ADHD complaints. She does that with progesterone. Suzanne also listened to a webinar by the two queens of progesterone, biochemist Phyllis Bronson and pharmacist Carol Petersen, led by the unrivaled Jill Chmieleweski: Defending All Forms of Bioidentical Progesterone With Phyllis Bronson, PhD and Carol Petersen, RPh, CNP. Listen, and learn a lot about the differences between bioidentical and synthetic progesterone.
The podcast is in English and includes video. On Suzanne's Substack you can watch it back as a video with Dutch subtitles. As a paying member - possible from 5 euros a month - you also receive the booklet De Perimenopauze Ontrafeld, which Suzanne wrote with GP Lotte van Dijk and which answers all your questions.
This week's sponsor is Oslo Skin Lab. Use the discount code 'nietgek' at www.osloskinlab.nl. The code gives 60% off the first package of the no-obligation Oslo Skin Lab membership; members receive 30% off all subsequent packages. Valid until December 31, 2025. The scientific evidence that The Solution's collagen works can be found here.
For the extended show notes, all tips and underlying research, and the subtitled video, see Substack. For more background and the extended show notes, go to: suzannerethans.substack.com
Want to help Suzanne in her search for an answer to why one woman falls apart in perimenopause and another doesn't? Then support her DNA analysis.
Want to advertise in this podcast? Send an email to adverteren@bienmedia.nl or go to www.bienmedia.nl. Hosted on Acast. See acast.com/privacy for more information.
This week on Heroes Three podcast we're in the throes of preparing our VGM CON panel, so please enjoy this short discussion on Cause: The Birth of Hero and a little preview of our upcoming panel! Check out some H3 art and merch! - https://www.teepublic.com/user/kf_carlito Find us online - https://linktr.ee/Heroes3Podcast Email us! - heroes3podcast@gmail.com
Jean-Sébastien Giguère today heads a group that operates several restaurants: La cabane du Coureur, the restaurant Le Coureur des bois, H3 in the Humanity hotel, and a Climat counter at the Time Out in Montreal. Today we start by talking about sugar season, which lasts only two months at the end of winter and the start of spring. Why open such a short-lived restaurant? Jean-Sébastien tells us about his team, how maple syrup is made from maple water, and how to look after a sugar bush... We have a lot of questions for him, because over time the chef has become an entrepreneur, so we take the opportunity to ask about his vision and his cooking. Did he always want to be a chef? We use the podcast to ask him every question that crosses our minds, and as you'll see, we don't hold back. On est dans le jus, the podcast about the restaurant business in Quebec.
Welcome back to Heroes Three podcast! This week we discuss the worldwide hit film Hero from 2002, directed by Zhang Yimou and starring Jet Li, Maggie Cheung, Tony Leung, Zhang Ziyi, and Donnie Yen! Full cast and credits at HKMDB. Check out some H3 art and merch! - https://www.teepublic.com/user/kf_carlito Find us online - https://linktr.ee/Heroes3Podcast Email us! - heroes3podcast@gmail.com Full blogpost with gifs here!
Our guest is CHARLIE WATKINS, CEO of Dominion Enterprises, longtime business leader and C Suite Executive, and author of the book Throw No Rocks. Charlie has been married for over 45 years and spends much of his time involved in his church and mentoring men. And also serves on the board of ICM, a global church building and development organization. We discuss leadership, mentoring men, forgiveness, the power of putting others first, and much more. Plus check out the list of 20 Leadership Podcasts to check out. Make sure to visit http://h3leadership.com to access the list and all the show notes. Thanks again to our partners for this episode: - OPEN DOORS - Get the latest FREE 2025 World Watch List and prayer guide at http://opendoorsus.org. Since they were founded by Brother Andrew nearly 70 years ago, Open Doors has become the world's largest on-the-ground network working to strengthen persecuted Christians. 380 million Christians face high levels of persecution for their faith- 1 in 7 worldwide. Download the FREE World Watch List now at http://opendoorsus.org. Plus the Prayer Guide gives you the World Watch List, real stories of persecuted Christians, profiles of all 50 countries and specific ways to pray for each one. Again, visit http://opendoorsus.org. And GENEROUS COFFEE – visit http://generouscoffee.com. You are going to drink coffee anyways, so why not make it life changing coffee? Generous is a specialty coffee roaster who donates 100% of profits to non-profits focused on fighting human injustice around the world. Whether a church, business, or event looking to order coffee in bulk, or simply ordering for your home visit http://generouscoffee.com and use Rate Code H3 for 10% off your entire order. You'll be drinking some of the highest quality craft roasted coffee in the world. Again, visit http://generouscoffee.com and use code H3 for 10% off your entire order.
Welcome to the latest episode of Heroes Three podcast! This week we are continuing our look at the works of Zhang Yimou with 1994's To Live, starring Ge You and Gong Li. Find us online - https://linktr.ee/Heroes3Podcast Email us! - heroes3podcast@gmail.com Check out some H3 art and merch! - https://www.teepublic.com/user/kf_carlito Timestamps (0:00) Intro (1:01) Why To Live (15:05) Technical aspects of the film (20:55) Novel talk (22:46) A bit about puppets (25:34) Movie talk (41:50) Tangent on 90s alt comedy (42:41) Movie talk 2 (1:07:59) Book ending and final thoughts (1:13:41) Plugs and Training for next week
Our guest is JENNI CATRON, a USA Today Best-Selling Author of her latest book, Culture Matters. Jenni is the founder and CEO of 4SIGHT Group, a team and organizational culture expert and consultant, host of the LeadCulture Podcast, and speaker and thought leader. We discuss why culture matters, how to change culture from the inside, trends in leadership, and much more. Plus check out the list of 25 Newsletters to Subscribe to. Make sure to visit http://h3leadership.com to access the list all the show notes. Thanks again to our partners for this episode: - OPEN DOORS - Get the latest FREE 2025 World Watch List and prayer guide at http://opendoorsus.org. Since they were founded by Brother Andrew nearly 70 years ago, Open Doors has become the world's largest on-the-ground network working to strengthen persecuted Christians. 380 million Christians face high levels of persecution for their faith- 1 in 7 worldwide. Download the FREE World Watch List now at http://opendoorsus.org. Plus the Prayer Guide gives you the World Watch List, real stories of persecuted Christians, profiles of all 50 countries and specific ways to pray for each one. Again, visit http://opendoorsus.org. And GENEROUS COFFEE – visit http://generouscoffee.com. You are going to drink coffee anyways, so why not make it life changing coffee? Generous is a specialty coffee roaster who donates 100% of profits to non-profits focused on fighting human injustice around the world. Whether a church, business, or event looking to order coffee in bulk, or simply ordering for your home visit http://generouscoffee.com and use Rate Code H3 for 10% off your entire order. You'll be drinking some of the highest quality craft roasted coffee in the world. Again, visit http://generouscoffee.com and use code H3 for 10% off your entire order.
On this episode of the pod, Anthony and the crew discuss Hila Klein's new tattoo and the H3 crew doxxing a random Instagram user. They're also discussing the Shredder controversy AND tasting Maestro's Choco-Tubes. Also on the menu: Shrek, Amouranth, and the Trump/Zelenski meeting!
In today's episode, Casey will discuss 5 primary points to consider for your own SEO strategy.
1. Keyword Research & Strategy
What It Is: Keyword research involves finding the right words and phrases that potential customers are using to search for pest control services. This includes short-tail (broad) and long-tail (specific) keywords.
How It Applies to a Pest Control Company:
- Identifying local keywords like "pest control near me," "exterminator in [city]," or "termite treatment in [location]"
- Researching service-specific terms like "bed bug removal Cincinnati" or "rodent exclusion Ann Arbor"
- Using keyword variations like "affordable pest control," "same-day exterminator," etc.
- Leveraging tools like Google Keyword Planner or SEMRush to find low-competition, high-intent keywords
2. On-Page SEO
What It Is: On-page SEO refers to optimizing website elements like page titles, meta descriptions, content, URLs, and images for search engines.
How It Applies to a Pest Control Company:
- Creating optimized title tags (e.g., "Best Pest Control in Phoenix - Victory Pest Defense")
- Writing compelling meta descriptions that improve click-through rates (e.g., "Fast, reliable pest control services in Chandler, AZ. Call now for a free inspection!")
- Using header tags (H1, H2, H3) effectively with location-specific keywords
- Optimizing images by adding descriptive alt text like "Bed bug extermination in Lexington, KY"
- Creating SEO-friendly URLs, such as:
  ✅ www.example.com/ant-control-cincinnati
  ❌ www.example.com/services1234
  (A short code sketch of these on-page elements follows this entry.)
3. Local SEO
What It Is: Local SEO focuses on optimizing your online presence for location-based searches. This includes Google Business Profile (GBP), local citations, and reviews.
How It Applies to a Pest Control Company:
- Claiming and fully optimizing a Google Business Profile with the correct business name, phone number, hours, and service areas
- Adding high-quality photos of technicians, vehicles, and completed jobs
- Encouraging positive customer reviews with follow-ups (e.g., "Thanks for using our service! We'd love to hear your feedback on Google.")
- Ensuring NAP (Name, Address, Phone Number) consistency across all listings (Yelp, Angi, BBB, etc.)
- Creating location-based service pages (e.g., "Pest Control in Killeen, TX" or "Rodent Removal in Northern Kentucky")
4. Technical SEO
What It Is: Technical SEO involves optimizing the backend of the website to improve speed, mobile-friendliness, security, and crawlability.
How It Applies to a Pest Control Company:
- Ensuring a fast-loading website (customers expect quick responses when dealing with pests!)
- Using mobile-responsive design (since most people search for pest control services from their phones)
- Implementing SSL security (HTTPS) for customer trust and SEO ranking
- Creating an XML sitemap and submitting it to Google for better indexing
- Fixing broken links, duplicate content, and redirect errors that could hurt rankings
5. Content Marketing & Link Building
What It Is: Content marketing focuses on creating valuable content that educates potential customers and helps with ranking. Link building involves getting other reputable websites to link back to your site.
Final Thoughts
By applying these 5 SEO pillars, a local pest control company can rank higher in Google searches, attract more leads, and grow their customer base.
A well-executed local SEO strategy combined with strong content marketing and technical SEO can significantly boost visibility and revenue.
Please review us at Rhino Pest Control Marketing and interact with us to let us know how we can improve in 2025.
Casey Lewis
casey@rhinopros.com
(925) 464-8383
Follow and subscribe at the following links:
https://www.youtube.com/@RhinoPestControlMarketing
https://www.facebook.com/rhinopestcontrolmarketing
Leave us a review on Google: https://g.page/r/CT9-E84ypVI0EBM/review
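For anyone who wants to turn the on-page checklist above into something concrete, here is a minimal Python sketch of how a pest control site might generate its location-based service pages (title tag, meta description, headings, alt text, and a clean URL slug). It is an illustration only, not code from the episode or from Rhino Pest Control Marketing: the company name, services, cities, and the 60/160-character limits are hypothetical placeholders.

```python
# Hypothetical sketch: generating on-page SEO elements for location-based
# service pages, following the title/meta/heading/URL advice in the episode.
# Company name, services, cities, and character limits are placeholders.

SERVICES = ["Ant Control", "Bed Bug Removal", "Rodent Exclusion"]
CITIES = ["Cincinnati", "Ann Arbor", "Killeen"]

TITLE_MAX = 60   # commonly cited guideline for title tag length (characters)
META_MAX = 160   # commonly cited guideline for meta description length


def slugify(*parts: str) -> str:
    """Build an SEO-friendly slug, e.g. ('Ant Control', 'Cincinnati') -> 'ant-control-cincinnati'."""
    return "-".join("-".join(p.lower().split()) for p in parts)


def build_page(service: str, city: str) -> dict:
    """Assemble the on-page elements for one service/city landing page."""
    title = f"{service} in {city} - Acme Pest Defense"            # title tag with local keyword
    meta = (f"Fast, reliable {service.lower()} in {city}. "
            f"Call now for a free inspection!")                    # click-worthy meta description
    page = {
        "url": f"/{slugify(service, city)}",                       # e.g. /ant-control-cincinnati
        "title": title,
        "meta_description": meta,
        "h1": f"{service} in {city}",                              # one H1 carrying the target keyword
        "h2": [f"Why Choose Us for {service}", f"Our {city} Service Area"],
        "image_alt": f"{service} technician on site in {city}",    # descriptive alt text
        "warnings": [],
    }
    # Simple length checks mirroring the title/meta guidance above.
    if len(title) > TITLE_MAX:
        page["warnings"].append(f"title is {len(title)} chars (> {TITLE_MAX})")
    if len(meta) > META_MAX:
        page["warnings"].append(f"meta description is {len(meta)} chars (> {META_MAX})")
    return page


if __name__ == "__main__":
    for service in SERVICES:
        for city in CITIES:
            page = build_page(service, city)
            print(page["url"], "->", page["title"], page["warnings"] or "")
```

Running it prints one URL and title per service/city pair, which is roughly the set of location-based service pages the local SEO section recommends creating.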
Welcome back to Heroes Three podcast! This episode we begin an arc discussing the work of Zhang Yimou, beginning with Raise the Red Lantern (with a touch of Red Sorghum for comparison) from 1991 starring Gong Li. Check out some H3 art and merch! - https://www.teepublic.com/user/kf_carlito Find us online - https://linktr.ee/Heroes3Podcast Email us! - heroes3podcast@gmail.com Timestamps (0:00) Intro (0:57) Why no Red Sorghum? (4:38) Zhang Yimou Background (7:51) Different kind of movie for Heroes Three (9:21) Matthew's limited Yimou experience (10:52) Gong Li and Red Sorghum (14:24) Movie Discussion (1:09:18) Final Thoughts (1:13:29) Plugs and Training for next week
Once again, a cold dawn is expected in Mexico City. Cleanup and drainage work at 44 points in the Iztapalapa borough. Putin shows he is willing to talk with Trump. More information in our podcast.
In this episode we break down the 12 Keys to Being a Great Teammate. Whether the president, executive director or senior pastor, or leading from the middle of the organization, these keys will help you become a better teammate and strengthen the team you're part of. Plus check out the top weekly leadership links. Make sure to visit http://h3leadership.com to access the full list and all the show notes. Thanks again to our partners for this episode: SUBSPLASH – engage your congregation through Subsplash. Schedule your free demo at http://subsplash.com/brad. Subsplash is the platform made to help maximize your church's growth and engagement. The go to for mobile apps, messaging, and streaming, along with building websites, groups, giving and more, Subsplash puts today's most innovative church technology into your hands so you can focus completely on ministry. Visit http://subsplash.com/brad and join more than 20,000 churches and ministries who partner with Subsplash. Again, visit http://subsplash.com/brad to schedule a quick, no obligation demo. And GENEROUS COFFEE – visit http://generouscoffee.com. You are going to drink coffee anyways, so why not make it life changing coffee? Generous is a specialty coffee roaster who donates 100% of profits to non-profits focused on fighting human injustice around the world. Whether a church, business, or event looking to order coffee in bulk, or simply ordering for your home visit http://generouscoffee.com and use Rate Code H3 for 10% off your entire order. You'll be drinking some of the highest quality craft roasted coffee in the world. Again, visit http://generouscoffee.com and use code H3 for 10% off your entire order.
Welcome back to Heroes Three podcast! This episode we're warming up with another look at a handful of animated shorts: Ek Cup Chaha from 2019 by Sumit Yempalle; The Boy Who Cheated Death from 2024 by Pipou Phuong Nguyen; Gurren Lagann Parallel Works episode 8 from 2008 by You Yoshinari; and the Onimusha 3: Demon Siege intro cinematic from 2004, directed by Takashi Yamazaki with fight choreography by Donnie Yen and Kenji Tanigaki. Check out some H3 art and merch! - https://www.teepublic.com/user/kf_carlito Find us online - https://linktr.ee/Heroes3Podcast Email us! - heroes3podcast@gmail.com Timestamps (0:00) Intro (01:05) The Heroes Three catch up (05:42) A Cup of Tea (11:37) The Boy Who Cheated Death (16:02) Gurren Lagann Parallel Works Ep 8 (26:51) Onimusha 3 Intro (39:31) Plugs and training for next week
Sofi finally experiences her first carne asada in CDMX, Fa recaps H3 and Hasan, and both share ideas for Galentine's and Valentine's on a budget.
Blog post with images: https://linshibi.com/?p=48579
1. According to data from Japan's National Institute of Infectious Diseases, in the week of January 27 to February 4 the roughly 5,000 reporting medical institutions across Japan averaged 5.87 influenza patients each, nearly half of the previous week's 11.06, continuing the decline. It has clearly come down from the peak of 64.39 at the end of last year.
2. Based on this, an estimated 194,000 people sought medical care for influenza across Japan this week, versus a peak of 2.58 million in a single week. From September 2, 2024 to now, cumulative estimated influenza-related medical visits this flu season in Japan have reached about 9.717 million.
3. All 47 prefectures decreased compared with the previous week, a very broad decline. Of course some regions still have relatively high activity; the current highest are Yamagata (16.02), Niigata (14.94), Okinawa (13.32), Iwate (11.35), Miyagi (9.44), Ishikawa (9.27), Gunma (8.95), Aomori (8.64), Toyama (8.60), and Nagano (8.35). Tokyo has fallen to 3.79 and Osaka to 3.34, already well below the advisory threshold of 10.
4. Clicking into the institute's chart, you can also see the breakdown by health-center district within each prefecture and which districts have more activity. Currently 61 health-center districts across Japan still exceed the warning threshold, spread across 27 prefectures. No district within Tokyo or Osaka exceeds the advisory threshold, so they appear entirely white on the map. In Hokkaido, flu is circulating in Furano, Obihiro, Kushiro, and Nakashibetsu, but the rest of the island is white. https://reurl.cc/r3jNWy
5. Influenza hospitalizations reported by about 500 medical institutions were 665 this week, nearly half of the previous week's 1,308 and sharply down from the peak of 5,304. Cases span all age groups, but people aged 60 and above account for the majority, 435 in total, roughly 70%.
6. Influenza typing over the most recent 5 weeks (weeks 1-5 of 2025): influenza A H1N1 accounted for 87%, the H3 subtype 10%, and influenza B 4%. Based on past seasons, some Japanese experts warn that influenza B could see another wave. Although the shares of the other two types appear to be rising, that is mainly because H1N1 has dropped sharply, so continued observation is needed.
7. As for Taiwan, the numbers may be underestimated because most clinics were closed over the Lunar New Year; a clearer trend should be available next Tuesday. Taiwan's CDC said on the 4th that in week 4 of this year (1/19-1/25) and week 5 (1/26-2/1, the Lunar New Year period), outpatient and emergency visits for influenza-like illness were about 162,000 and 91,000 respectively, with week 4 the highest in nearly ten flu seasons. Week 5 fell during the holiday, when most clinics were closed, at 91,436 visits; visits are expected to rebound for 1-2 weeks after the holiday and then gradually decline.
8. In Taiwan this flu season, from 2024/10/1 through 2025/2/3 there have been 667 severe influenza cases, mostly among those aged 65 and above (57%), and 132 deaths; more than 90% of both confirmed severe cases and deaths had not received this season's flu vaccine.
9. Incidentally, from 2024/9/1 through 2025/2/3 Taiwan has recorded 467 severe COVID-19 cases, of which 102 died, mostly people aged 65 and above and those with chronic conditions; among confirmed and fatal cases reported since 2024/10/1, over 97% had not received the JN.1 vaccine. The news just doesn't report it, but COVID has not left us. It has been here all along.
My anti-epidemic items in Japan: how do you prevent the flu? https://linshibi.com/?p=49073
Travel insurance and sudden illness abroad: check whether notifiable infectious diseases are covered https://linshibi.com/?p=48622
I've uploaded all the current coupons to a cloud drive so you can download the whole bundle at once! https://reurl.cc/r9Ej24
By category: Japan drugstore must-have coupons in one bundle https://reurl.cc/DjOqqd
Japan electronics, clothing, and sporting goods coupon collection https://reurl.cc/OMZVa7
Japan department store and airport duty-free shop coupon collection https://reurl.cc/Ren4DG
04b followers' exclusive deals https://reurl.cc/XG1r67
Our guest is IAN MORGAN CRON, a Best-selling author of multiple books, including his most recent The Fix, which is based on the 12 steps program. Ian is also a speaker, Enneagram expert, host of the Typology Podcast, and founder of the Typology Institute. We discuss addiction, the power of the 12 step program, spiritual awakening, personal growth, and much more. Make sure to visit http://h3leadership.com to access the full list and all the show notes. Thanks again to our partners for this episode: CONVOY OF HOPE - Please donate to the LA Fires efforts and also Hurricane Helene and Milton relief effort and ongoing work at http://convoyofhope.org/donate. Convoy is my trusted partner for delivering food and relief by responding to disasters in the US and all around the world. Right now, Convoy of Hope is responding to the LA fires, along with devastation in the southeast US from Hurricane Helene and Milton, providing basic needs like food, hygiene supplies, medical supplies, blankets, bedding, clothing and more. All through partnering with local Churches. Join me and please support their incredible work. To donate visit http://convoyofhope.org/donate. And GENEROUS COFFEE – visit http://generouscoffee.com. You are going to drink coffee anyways, so why not make it life changing coffee? Generous is a specialty coffee roaster who donates 100% of profits to non-profits focused on fighting human injustice around the world. Whether a church, business, or event looking to order coffee in bulk, or simply ordering for your home visit http://generouscoffee.com and use Rate Code H3 for 10% off your entire order. You'll be drinking some of the highest quality craft roasted coffee in the world. Again, visit http://generouscoffee.com and use code H3 for 10% off your entire order.
It's the January Top Ten Leadership List Episode! Links and recommendations on music, books, podcasts and more. Plus check out the latest edition of the Young Influencers List, with 14 young leaders you should know. Make sure to visit http://h3leadership.com to access the full list and all the show notes. Thanks again to our partners for this episode: - OPEN DOORS - Get the latest FREE 2025 World Watch List and prayer guide at http://opendoorsus.org. Since they were founded by Brother Andrew nearly 70 years ago, Open Doors has become the world's largest on-the-ground network working to strengthen persecuted Christians. 380 million Christians face high levels of persecution for their faith- 1 in 7 worldwide. Download the FREE World Watch List now. Plus the Prayer Guide gives you the World Watch List, real stories of persecuted Christians, profiles of all 50 countries and specific ways to pray for each one. Again, visit http://opendoorsus.org. And GENEROUS COFFEE – visit http://generouscoffee.com. You are going to drink coffee anyways, so why not make it life changing coffee? Generous is a specialty coffee roaster who donates 100% of profits to non-profits focused on fighting human injustice around the world. Whether a church, business, or event looking to order coffee in bulk, or simply ordering for your home visit http://generouscoffee.com and use code H3 for 10% off your entire order. You'll be drinking some of the highest quality craft roasted coffee in the world. Again, visit http://generouscoffee.com and use code H3 for 10% off your entire order.
On this second LIVE episode, the DNW crew discuss an H3 fan's very long videos about Anthony, Anthony binging all 3 seasons of Dubai Bling, Elon and Trump's recent controversies, local drama between Gino Raidy and Comedian Nour Hajjar, Dubai Chocolate overload, and plenty more!
Anthony is joined by Nour to respond to Ethan Klein calling him a rat on a recent H3 episode. They're also discussing the election of a new Lebanese president, the LA fires, Lonerbox, and more!
Happy Holidays from Heroes Three! This week we are closing out 2024 with Marty's host pick, God of Gamblers III: Back to Shanghai directed by Wong Jing and starring Stephen Chow, Ng Man Tat, and Gong Li! Check out some H3 art and merch! - https://www.teepublic.com/user/kf_carlito Full cast and credits at HKMDB! Find us online - https://linktr.ee/Heroes3Podcast Email us! - heroes3podcast@gmail.com Full blog post with gifs here! Timestamps (0:00) Intro (3:00) Production background (6:36) Carlos and Matthew Initial Thoughts (8:40) Hong Kong cross talk with Hollywood (11:01) Movie Thoughts (16:05) Gong Li: Too Hot for Taiwan? (26:22) Back to the movie (58:40) Possible releases in the US (1:01:14) Ng Man-tat and Stephen Chow (1:03:10) Final thoughts and plugs
Happy holidays! We'll be sharing snippets from Latent Space LIVE! through the break, bringing you the best of 2024! We want to express our deepest appreciation to event sponsors AWS, Daylight Computer, Thoth.ai, StrongCompute, Notable Capital, and most of all all our LS supporters who helped fund the gorgeous venue and A/V production!
For NeurIPS last year we did our standard conference podcast coverage interviewing selected papers (that we have now also done for ICLR and ICML), however we felt that we could be doing more to help AI Engineers 1) get more industry-relevant content, and 2) recap 2024 year in review from experts. As a result, we organized the first Latent Space LIVE!, our first in person miniconference, at NeurIPS 2024 in Vancouver.
Of perennial interest, particularly at academic conferences, is scaled-up architecture research as people hunt for the next Attention Is All You Need. We have many names for them: "efficient models", "retentive networks", "subquadratic attention" or "linear attention", but some of them don't even have any lineage with attention - one of the best papers of this NeurIPS was Sepp Hochreiter's xLSTM, which has a particularly poetic significance as one of the creators of the LSTM returning to update and challenge the OG language model architecture.
So, for lack of a better term, we decided to call this segment "the State of Post-Transformers" and fortunately everyone rolled with it.
We are fortunate to have two powerful friends of the pod to give us an update here:
* Together AI: with CEO Vipul Ved Prakash and CTO Ce Zhang joining us to talk about how they are building Together together as a quote unquote full stack AI startup, from the lowest level kernel and systems programming to the highest level mathematical abstractions driving new model architectures and inference algorithms, with notable industry contributions from RedPajama v2, Flash Attention 3, Mamba 2, Mixture of Agents, BASED, Sequoia, Evo, Dragonfly, Dan Fu's ThunderKittens and many more research projects this year
* Recursal AI: with CEO Eugene Cheah who has helped lead the independent RWKV project while also running Featherless AI. This year, the team has shipped RWKV v5, codenamed Eagle, to 1.5 billion Windows 10 and Windows 11 machines worldwide, to support Microsoft's on-device, energy-usage-sensitive Windows Copilot usecases, and has launched the first updates on RWKV v6, codenamed Finch and GoldFinch. On the morning of Latent Space Live, they also announced QRWKV6, a Qwen 32B model modified with RWKV linear attention layers.
We were looking to host a debate between our speakers, but given that both of them were working on post-transformers alternatives
Full Talk on Youtube
Please like and subscribe!
Links
All the models and papers they picked:
* Earlier Cited Work
  * Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention
  * Hungry hungry hippos: Towards language modeling with state space models
  * Hyena hierarchy: Towards larger convolutional language models
  * Mamba: Linear-Time Sequence Modeling with Selective State Spaces
  * S4: Efficiently Modeling Long Sequences with Structured State Spaces
* Just Read Twice (Arora et al)
  * Recurrent large language models that compete with Transformers in language modeling perplexity are emerging at a rapid rate (e.g., Mamba, RWKV). Excitingly, these architectures use a constant amount of memory during inference.
However, due to the limited memory, recurrent LMs cannot recall and use all the information in long contexts leading to brittle in-context learning (ICL) quality. A key challenge for efficient LMs is selecting what information to store versus discard. In this work, we observe the order in which information is shown to the LM impacts the selection difficulty.
  * To formalize this, we show that the hardness of information recall reduces to the hardness of a problem called set disjointness (SD), a quintessential problem in communication complexity that requires a streaming algorithm (e.g., recurrent model) to decide whether inputted sets are disjoint. We empirically and theoretically show that the recurrent memory required to solve SD changes with set order, i.e., whether the smaller set appears first in-context.
  * Our analysis suggests, to mitigate the reliance on data order, we can put information in the right order in-context or process prompts non-causally. Towards that end, we propose: (1) JRT-Prompt, where context gets repeated multiple times in the prompt, effectively showing the model all data orders. This gives 11.0±1.3 points of improvement, averaged across 16 recurrent LMs and the 6 ICL tasks, with 11.9× higher throughput than FlashAttention-2 for generation prefill (length 32k, batch size 16, NVidia H100). We then propose (2) JRT-RNN, which uses non-causal prefix-linear-attention to process prompts and provides 99% of Transformer quality at 360M params., 30B tokens and 96% at 1.3B params., 50B tokens on average across the tasks, with 19.2× higher throughput for prefill than FA2.
* Jamba: A 52B Hybrid Transformer-Mamba Language Model
  * We present Jamba, a new base large language model based on a novel hybrid Transformer-Mamba mixture-of-experts (MoE) architecture.
  * Specifically, Jamba interleaves blocks of Transformer and Mamba layers, enjoying the benefits of both model families. MoE is added in some of these layers to increase model capacity while keeping active parameter usage manageable.
  * This flexible architecture allows resource- and objective-specific configurations. In the particular configuration we have implemented, we end up with a powerful model that fits in a single 80GB GPU.
  * Built at large scale, Jamba provides high throughput and small memory footprint compared to vanilla Transformers, and at the same time state-of-the-art performance on standard language model benchmarks and long-context evaluations. Remarkably, the model presents strong results for up to 256K tokens context length.
  * We study various architectural decisions, such as how to combine Transformer and Mamba layers, and how to mix experts, and show that some of them are crucial in large scale modeling. We also describe several interesting properties of these architectures which the training and evaluation of Jamba have revealed, and plan to release checkpoints from various ablation runs, to encourage further exploration of this novel architecture. We make the weights of our implementation of Jamba publicly available under a permissive license.
* SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformers
  * We introduce Sana, a text-to-image framework that can efficiently generate images up to 4096×4096 resolution. Sana can synthesize high-resolution, high-quality images with strong text-image alignment at a remarkably fast speed, deployable on laptop GPU.
Core designs include:
  * (1) Deep compression autoencoder: unlike traditional AEs, which compress images only 8×, we trained an AE that can compress images 32×, effectively reducing the number of latent tokens.
  * (2) Linear DiT: we replace all vanilla attention in DiT with linear attention, which is more efficient at high resolutions without sacrificing quality.
  * (3) Decoder-only text encoder: we replaced T5 with modern decoder-only small LLM as the text encoder and designed complex human instruction with in-context learning to enhance the image-text alignment.
  * (4) Efficient training and sampling: we propose Flow-DPM-Solver to reduce sampling steps, with efficient caption labeling and selection to accelerate convergence.
  * As a result, Sana-0.6B is very competitive with modern giant diffusion model (e.g. Flux-12B), being 20 times smaller and 100+ times faster in measured throughput. Moreover, Sana-0.6B can be deployed on a 16GB laptop GPU, taking less than 1 second to generate a 1024×1024 resolution image. Sana enables content creation at low cost.
* RWKV: Reinventing RNNs for the Transformer Era
  * Transformers have revolutionized almost all natural language processing (NLP) tasks but suffer from memory and computational complexity that scales quadratically with sequence length. In contrast, recurrent neural networks (RNNs) exhibit linear scaling in memory and computational requirements but struggle to match the same performance as Transformers due to limitations in parallelization and scalability.
  * We propose a novel model architecture, Receptance Weighted Key Value (RWKV), that combines the efficient parallelizable training of transformers with the efficient inference of RNNs.
  * Our approach leverages a linear attention mechanism and allows us to formulate the model as either a Transformer or an RNN, thus parallelizing computations during training and maintains constant computational and memory complexity during inference.
  * We scale our models as large as 14 billion parameters, by far the largest dense RNN ever trained, and find RWKV performs on par with similarly sized Transformers, suggesting future work can leverage this architecture to create more efficient models. This work presents a significant step towards reconciling trade-offs between computational efficiency and model performance in sequence processing tasks.
* LoLCATs: On Low-Rank Linearizing of Large Language Models
  * Recent works show we can linearize large language models (LLMs) -- swapping the quadratic attentions of popular Transformer-based LLMs with subquadratic analogs, such as linear attention -- avoiding the expensive pretraining costs. However, linearizing LLMs often significantly degrades model quality, still requires training over billions of tokens, and remains limited to smaller 1.3B to 7B LLMs.
  * We thus propose Low-rank Linear Conversion via Attention Transfer (LoLCATs), a simple two-step method that improves LLM linearizing quality with orders of magnitudes less memory and compute.
  * We base these steps on two findings.
  * First, we can replace an LLM's softmax attentions with closely-approximating linear attentions, simply by training the linear attentions to match their softmax counterparts with an output MSE loss ("attention transfer").
  * Then, this enables adjusting for approximation errors and recovering LLM quality simply with low-rank adaptation (LoRA).
  * LoLCATs significantly improves linearizing quality, training efficiency, and scalability.
We significantly reduce the linearizing quality gap and produce state-of-the-art subquadratic LLMs from Llama 3 8B and Mistral 7B v0.1, leading to 20+ points of improvement on 5-shot MMLU.
  * Furthermore, LoLCATs does so with only 0.2% of past methods' model parameters and 0.4% of their training tokens.
  * Finally, we apply LoLCATs to create the first linearized 70B and 405B LLMs (50x larger than prior work).
  * When compared with prior approaches under the same compute budgets, LoLCATs significantly improves linearizing quality, closing the gap between linearized and original Llama 3.1 70B and 405B LLMs by 77.8% and 78.1% on 5-shot MMLU.
Timestamps
* [00:02:27] Intros
* [00:03:16] Why Scale Context Lengths? or work on Efficient Models
* [00:06:07] The Story of SSMs
* [00:09:33] Idea 1: Approximation -> Principled Modeling
* [00:12:14] Idea 3: Selection
* [00:15:07] Just Read Twice
* [00:16:51] Idea 4: Test Time Compute
* [00:17:32] Idea 2: Hardware & Kernel Support
* [00:19:49] RWKV vs SSMs
* [00:24:24] RWKV Arch
* [00:26:15] QRWKV6 launch
* [00:30:00] What's next
* [00:33:21] Hot Takes - does anyone really need long context?
Transcript
[00:00:00] AI Charlie: We're back at Latent Space Live, our first mini conference held at NeurIPS 2024 in Vancouver. This is Charlie, your AI co-host. As a special treat this week, we're recapping the best of 2024 going domain by domain. We sent out a survey to the over 900 of you who told us what you wanted, and then invited the best speakers in the Latent Space Network to cover each field. [00:00:24] AI Charlie: 200 of you joined us in person throughout the day, with over 2200 watching live online. Thanks! Our next keynote covers the State of Transformers alternative architectures, with a special joint presentation with Dan Fu of Together AI and Eugene Cheah of Recursal AI and Featherless AI. We've featured both Together and Recursal on the pod before, with CEO Vipul Ved Prakash introducing them. [00:00:49] AI Charlie: And CTO Ce Zhang joining us to talk about how they are building Together together as a quote unquote full stack AI startup from the lowest level kernel and systems [00:01:00] programming to the highest level mathematical abstractions driving new model architectures and inference algorithms with notable industry contributions from RedPajama v2, Flash Attention 3, Mamba 2, Mixture of Agents, [00:01:15] AI Charlie: BASED, Sequoia, Evo, Dragonfly, Dan Fu's ThunderKittens, and many more research projects this year. As for Recursal and Featherless, we were the first podcast to feature RWKV last year, and this year the team has shipped RWKV v5, codenamed Eagle, to 1.5 billion Windows 10 and Windows 11 machines worldwide to support Microsoft's on-device, energy-usage-sensitive Windows Copilot usecases, and has launched the first updates on RWKV v6, codenamed Finch and GoldFinch. [00:01:53] AI Charlie: On the morning of Latent Space Live, they also announced QRWKV6, a Qwen 32B model [00:02:00] modified with RWKV linear attention layers. Eugene has also written the single most popular guest post on the Latent Space blog this year. Yes, we do take guest posts on what he has discovered about the H100 GPU inference NeoCloud market since the successful launch of Featherless AI this year. [00:02:20] AI Charlie: As always, don't forget to check the show notes for the YouTube link to their talk as well as their slides. Watch out and take care. [00:02:27] Intros [00:02:27] Dan Fu: Yeah, so thanks so much for having us.
So this is going to be a little bit of a two part presentation. My name is Dan. I'm at Together AI, and I'll be joining UCSD as faculty in about a year. And Eugene, you want to introduce yourself?[00:02:46] Eugene Cheah: Eugene, I lead the art activity team, and I, I'm CEO of Featherless, and we both work on this new post transformer architecture space.[00:02:55] Dan Fu: Yeah, so yeah, so today we're really excited to talk to you a little bit [00:03:00] about that. So first I'm going to give a broad overview of kind of the last few years of progress in non post transformer architectures. And then afterwards Eugene will tell us a little bit about the latest and the greatest and the latest frontier models in this space.[00:03:16] Why Scale Context Lengths? or work on Efficient Models[00:03:16] Dan Fu: So, the story starts with Scaling. So this is probably a figure or something like this that you've seen very recently. Over the last five to six years, we've seen models really scale up in parameter size, and that's brought with it a bunch of new capabilities, like the ability to talk to you and tell you sometimes how to use your Colab screens.[00:03:35] Dan Fu: But another place where we've seen scaling especially recently is scaling in context length. So this can mean Having more text inputs for your models, but it can also mean things like taking a lot of visual token inputs image inputs to your models or generating lots of outputs. And one thing that's been really exciting over the last few months or so is that we're, we're seeing scaling, not only during training time, but also [00:04:00] during test time.[00:04:00] Dan Fu: So this is one of the, the, this is the iconic image from the OpenAI 01 release. Not only are we starting to scale train time compute, but we're also starting to scale test time compute. Now if you're familiar with our attention and our transformer architectures today, this graph on the right might look a little bit scary.[00:04:19] Dan Fu: And one of the reasons is that the implications are a little bit Interesting. So what does it mean if we want to continue having smarter and smarter models? Do we just need to start building bigger, bigger data centers, spending more flops? Is this this little Dolly 3, we need more flops, guys? Is this going to be the future of all of AI?[00:04:39] Dan Fu: Or is there a better way, another path forward? Maybe we can get the same capabilities that we've gotten used to, But for a lot less compute, a lot less flops. And one of the things that we're going to talk about today is specifically looking at that core attention operator in some of these models.[00:04:57] Dan Fu: And the reason is that so this is just some, some [00:05:00] basic you know, scaling curves, but attention has compute that scales quadratically in the context length. So that means that if you're doing something like test time compute and you want to spend a bunch of tokens thinking about what comes next, the longer that that goes the, the, the more tokens you spend on that, that compute grows quadratically in that.[00:05:19] Dan Fu: One of the questions that we're interested in is, can we take that basic sequence model, that basic sequence primitive at the bottom, and get it to scale better? Can we scale in, let's say, n to the 3 halves or n log n? So in, in the first part of the talk, so we just went over the introduction. 
What I'm gonna do over the next few slides is just talk about some of the key advances and ideas that have shown over the past few years since maybe early 2020 to, to now that shown promise that this might actually be possible.[00:05:48] Dan Fu: That you can actually get potentially the same quality that we want while scale, while scaling better. So to do that, we're and, and basically the, the story that we're gonna look is we're gonna start to see [00:06:00] how. So this is a basic graph of just the past couple years of progress of perplexity where that blue line, that dotted blue line, is attention.[00:06:07] The Story of SSMs[00:06:07] Dan Fu: It's your basic transformer, full dense attention. And then the dots coming down are some of the methods that you'll see in this presentation today. We're going to turn the clock back all the way to 2020. So this, this, this question of can we make attention subquadratic? Basically, as soon as we said attention is all you need, People started asking this question.[00:06:28] Dan Fu: So we have this quadratic attention operator. Can we do better? I'll briefly talk about why attention is quadratic. And the basic thing that happens, if you're not familiar, is that you have these inputs, these keys and queries. And what you do in this attention matrix, this S matrix over here, is that you're using, you're comparing every token in your input to every other token.[00:06:49] Dan Fu: So when I try to do something like upload a whole book to Gemini, what happens beyond the Maybe not Gemini, because we don't necessarily know what architecture is. But let's say we upload it to LLAMA, what happens beyond [00:07:00] the scenes, behind the scenes, is that it's going to take every single word in that book and compare it to every other word.[00:07:05] Dan Fu: And this has been a really, it's, it's led to some pretty impressive things. But it's kind of a brute forcing of the way that you would try to interpret a interpret something. And what attention does in particular is the, and then what attention, sorry, don't want to. Okay, no, no laser pointer. What, what attention does afterwards is that instead of always operating in this quadratic thing, it takes a row wise softmax over this matrix, and then multiplies it by this values matrix.[00:07:32] Dan Fu: So, one of the key points to notice is that the output size is always going to be the same as the inputs, at least in standard self attention. So one of the first things that folks tried to do around 2020 is this thing called linear attention, which is just, just noticing that if we take out this softmax from here, if we take out this non linearity in the middle of the attention operation, and then if you compute the keys and the values operation first, you actually never hit this quadratic bottleneck.[00:07:57] Dan Fu: So that, that's potentially a way [00:08:00] to get a lot more computationally efficient. And there are various ways to do this by basically using feature maps or try to approximate this overall attention computation. But some of this work sort of started to hit a wall in 2020. And the basic challenges were, were two.[00:08:16] Dan Fu: So one was quality. It was back then, it was kind of hard to, to get good quality with these linear attention operators. The other one was actually hardware efficiency. So these, this feature map that was just shown by a simplify simplify here. 
Actually ends up being quite computationally expensive if you just implement it naively.[00:08:34] Dan Fu: So you started having these operators that not only were you sure, you're not really sure if they have the same quality, but also they're actually just wall clock slower. So you kind of end up getting the worst of both worlds. So this was the the stage. So that kind of sets the stage for four years ago.[00:08:49] Dan Fu: Keep this in mind because linear attention is actually going to come back in a few years once we have a better understanding. But one of the works that started kicking off this, this [00:09:00] mini revolution in post transformer architectures was this idea called states based model. So here the seminal work is, is one about our work queue in 2022.[00:09:09] Dan Fu: And this, this piece of work really brought together a few ideas from, from some long running research research lines of work. The first one was, and this is really one of the keys to, to closing the gap in quality was just using things that, that if you talk to a, a, an electrical engineer off the street, they might know off, off the, like the back of their hand.[00:09:33] Idea 1: Approximation -> Principled Modeling[00:09:33] Dan Fu: But taking some of those properties with how we model dynamical systems in signal processing and then using those ideas to model the inputs, the, the text tokens in, for example a transformer like Next Token Prediction Architecture. So some of those early states-based model papers were looking at this relatively, relatively simple recurrent update model that comes from maybe chapter one of a signal processing class.[00:09:59] Dan Fu: But then using [00:10:00] some principle theory about how you should do that recurrent update in order to really get the most that you can out of your hidden state, out of your out of your sequence. So that, that was one key idea for quality and. When this was eventually realized, you started to see a bunch of benchmarks that were pretty sticky for a few years.[00:10:20] Dan Fu: Things like long range arena, some long sequence evaluation benchmarks, There was stuff in time series, time series analysis. They started to, you started to see the quality tick up in meaningful ways. But the other key thing that What's so influential about these states based models is that they also had a key idea about how you can compute these things efficiently.[00:10:45] Dan Fu: So if you go back to your machine learning 101 class where you learned about RNNs, one thing that you may have learned is that they don't paralyze as well as detention, because if you just run them naively, you have to do this kind of sequential update to process new tokens, [00:11:00] whereas in attention, you can process all the tokens in parallel at one time.[00:11:04] Dan Fu: One of the key insights behind the S4 paper was that these recurrent models, you could take them and you could also formulate them as a convolution. And in particular, with a convolution, you could, instead of using a PyTorch conv1d operation, you can compute that with the FFT. And that would give you n log n compute in the in the sequence length n with an operator that was relatively well optimized for modern hardware.[00:11:28] Dan Fu: So those are really, I'd say, the two key ideas in 2022 that started allowing these breakthroughs to happen in these non transformer architectures. 
So, these ideas about how to principally model sorry, how to model the recurrent updates of a mo of, of a sequence in a principled way, and also these key ideas in how you can compute it efficiently by turning it into a convolution and then scaling it up with the FFT.[00:11:53] Dan Fu: Along those same lines, so afterwards we started putting out some work on specialized kernels, so just [00:12:00] like we have flash attention for transformers, we also have works like flash fft conf, and if you look at these lines of work oftentimes when, whenever you see a new architecture, you see a new primitive one of the, one of the table stakes now is, do you have an efficient kernel so that you can actually get wall clock speed up?[00:12:14] Idea 3: Selection[00:12:14] Dan Fu: So by 2022, We are starting to have these models that had promising quality primitives, but and, and also promising wall clocks. So you could actually see regimes where they were better than transformers in meaningful ways. That being said, there were, there's still sometimes a quality gap, particularly for language modeling.[00:12:33] Dan Fu: And because languages, It's so core to what we do in sequence modeling these days the, the next, the next key idea that I'm going to talk about is this idea of selection mechanisms. And this is basically an idea of, so you have this recurrent state that you're keeping around that just summarizes everything that, that came before.[00:12:50] Dan Fu: And to get a good sequence model, one of the things that you really need to be able to do is have the model learn what's the best way to pick out pieces from that recurrent [00:13:00] state. So one of the, one of the major ideas here in a line of work called H3, Hungry Hungry Hippos, and also these hyena models were One way you can do this is by just adding some simple element wise gates.[00:13:13] Dan Fu: So versions of these ideas have been around for decades. If you squint at the LSTM paper you, you can probably find, find this gating mechanism. But turns out you can take those old ideas, add them into these new. state space models, and then you can see quality start to pick up. If you've heard of the Mamba model, this also takes the selection to the next level by actually making some changes in that fundamental recurrent state space.[00:13:40] Dan Fu: So, it's not only just this gating that happens around the SSM layer, but also you can actually make The ABCD matrices of your state space model, you can make them data dependent, which will allow you to even better select out different pieces from your hidden state depending on what you're seeing. I'll also point out if you look at the [00:14:00] bottom right of this figure, there's this little triangle with a GPU SRAM, GPU HBM, and this, this is just continuing that trend of when you have a new architecture you, you, you also release it with a kernel to, to, to show that it is hardware efficient, that it, that it can be hardware efficient on modern hardware.[00:14:17] Dan Fu: The, the, one of the next cool things that happened is once we had this understanding of these are the basic pieces, these are the basic principles behind some of the sequence models linear attention actually started to come back. 
So in earlier this year, there was a model called BASED the, from Simran Arora and, and some other folks, that combined a more principled version of linear attention that basically the, the, the, the two second summary is that it used a Taylor approximation of the softmax attention, combined that with a simple sliding window attention and was starting to able, starting to be able to expand the Pareto frontier of how much data can you recall from your sequence, versus how small is your recurrent state size.[00:14:58] Dan Fu: So those orange dots [00:15:00] are, at the top there, are just showing smaller sequences that can recall more memory.[00:15:07] Just Read Twice[00:15:07] Dan Fu: And the last major idea I think that has been influential in this line of work and is very relatively late breaking just a few months ago, is just the basic idea that when you have these models that are fundamentally more efficient in the sequence length, you maybe don't want to prompt them or use them in exactly the same way.[00:15:26] Dan Fu: So this was a really cool paper called Just Read Twice, also from Simran. That basically said, hey, all these efficient models can process tokens so much more efficiently than transformers that they can sometimes have unfair advantages compared to a simple transformer token. So, or sorry, a simple transformer model.[00:15:44] Dan Fu: So take, for example the standard, the standard use case of you have some long document, you're going to pass it in as input, and then you're going to ask some question about it. One problem you might imagine for a recurrent model where you have a fixed state size is, let's say that [00:16:00] you're. Article is very long, and you're trying to ask about some really niche thing.[00:16:04] Dan Fu: You can imagine it might be hard for the model to know ahead of time what information to put into the hidden state. But these, these, these models are so much more efficient that you can do something really stupid, like, you can just put the document write down the document, write down the question, write down the document again, and then write down the question again, and then this time, the second time that you go over that document, you know exactly what to look for.[00:16:25] Dan Fu: And the cool thing about this is, so this is, And this this results in better quality, especially on these recall intensive tasks. But the other interesting thing is it really takes advantage of the more efficient architectures that, that we're having here. So one of the other, I think, influential ideas in this line of work is if you change the fundamental compute capabilities of your model and the way that it scales, you can actually start to query it at test time differently.[00:16:51] Idea 4: Test Time Compute[00:16:51] Dan Fu: And this actually, of course, goes back to those slides on test time compute. So while everybody's looking at, say, test time compute for big transformer models, [00:17:00] I think potentially a really interesting research question is, how can you take those and how does it change with this new next generation of models?[00:17:09] Dan Fu: So the, I'll just briefly summarize what some of those key ideas were and then talk and then show you briefly kind of what the state of the art is today. 
So, so the four key ideas are instead of just doing a simple linear attention approximation, instead take ideas that we know from other fields like signal processing, do a more principled approach to your modeling of the sequence.[00:17:32] Idea 2: Hardware & Kernel Support[00:17:32] Dan Fu: Another key idea throughout all these lines of work is you really want. Hardware and kernel support from day one. So, so even if your model is theoretically more efficient if somebody goes and runs it and it's two times slower one of the things that, that we've learned is that if, if you're in that situation, it's, it's just gonna be dead on arrival.[00:17:49] Dan Fu: So you want to be designing your architectures one of the key, key machine learning ideas that has been important for the quality is just making sure that you encode different ways that you can [00:18:00] select from your hidden state and, and really focus on that as a key decider of quality. And finally, I think one of the, the, the emerging new, new things for, for this line of work and something that's quite interesting is, What are the right test time paradigms for these models?[00:18:15] Dan Fu: How do they change relative to relative to what you might do for a standard transformer? I'll briefly end this section. So I've labeled this slide where we are yesterday because Eugene is going to talk about some new models that he released literally this morning. But as of yesterday, some of the really cool results out of the, these efficient alternative models were so AI2 trained this hybrid MOE called Jamba.[00:18:40] Dan Fu: That, that, that seems, that is currently the state of the art for these non transformer architectures. There's this NVIDIA and MIT put out this new diffusion model called SANA recently that one of their key key observations is that you can take a standard diffusion transformer diffusion model, replace the layers with linear [00:19:00] attention, and then that lets you scale to much larger much larger images, much, much Much larger sequences more efficiently.[00:19:07] Dan Fu: And and one thing that I don't think anybody would have called when a few years ago is that one of those gated SSM, gated states based models ended up on the cover of Science because a great group of folks went and trained some DNA models. So that's Michael Polley, Eric Yuen from from Stanford and the Arc Institute.[00:19:26] Dan Fu: So it's, we're really at an exciting time in 2024 where these non transformer, post transformer architectures are showing promise across a wide range. Across a wide range of, of modalities, of applications, and, and of tasks. And with that, I'll pass it on to Eugene, who can tell you a little bit about the latest and greatest with RWKV.[00:19:49] RWKV vs SSMs[00:19:49] Eugene Cheah: So, that's useful? Yeah. You're talking to here. Oh, I'm talking to here. Okay. So, yeah, two streams. Yeah. So, I think one common questions that we tend to get asked, right, is what's the difference between [00:20:00] RWKV and state space? So I think one of the key things to really understand, right the difference between the two groups, right, is that we are actually more like an open source, random internet meets academia kind of situation.[00:20:11] Eugene Cheah: Like, most of us never wrote any paper, but we, we basically look at RNNs and linear intention when intention is all you need came out, and then we decided to like, hey there is a quadratic scaling problem. Why don't we try fixing that instead? 
So we ended up developing our own branch, but we end up sharing ideas back and forth.[00:20:30] Eugene Cheah: And we do all this actively in Discord, GitHub, etc. This was so bad for a few years that basically the average group member's h-index was close to zero, and EleutherAI actually came in and helped us write our first paper. Great, now our h-index is three, apparently. But the thing is, a lot of these experiments led to results, and essentially we took the same ideas from linear attention [00:21:00] and we built on them.[00:21:01] Eugene Cheah: So, to take a step back: how does RWKV handle its own attention mechanic and achieve the goal of roughly O(N) compute, in service of our overall goal to make AI accessible to everyone, regardless of language, nation, or compute? That's our goal. We actually train our models primarily on over a hundred languages, which is another topic altogether.[00:21:23] Eugene Cheah: And our goal is to train on even 200 languages, to cover all the languages in the world. But at the same time, we work on this architecture to lower the compute cost so that people can run it on Raspberry Pis and on anything. So, how did RWKV break the dependency of the LSTM token flow? I think it's probably easier to understand the architecture from the RNN lens,[00:21:46] Eugene Cheah: because that's what we built on. The state space folks kind of tried to start anew and took lessons from elsewhere, so there's a little bit of divergence there. And this is, aka, our version of linear attention. To take a step back: [00:22:00] all foundation models, be it transformers or non-transformers, at a very high level,[00:22:05] Eugene Cheah: pump in tokens, I mean text, turn them into embeddings, go through a lot of layers, generate a lot of states, whether that's the KV cache or RNN states or RWKV states (they are not the same thing), and output an embedding. We just take more layers and more embeddings, and somehow that magically works.[00:22:23] Eugene Cheah: So, if you remember your ancient RNN lessons, the general idea is that you have the embedding information flowing all the way up, and you take that information and flow it back down, and then you process it as part of your LSTM layers.[00:22:41] Eugene Cheah: This is how it generally works. Karpathy is quoted as saying that RNNs are actually unreasonably effective. The problem is that this is not scalable. To start doing work on the second token, you need to wait for the first token, and likewise for the third token and the fourth token, yada yada.[00:22:55] Eugene Cheah: That is CPU land, not GPU land. You [00:23:00] can have an H100 and you can't even use 1 percent of it. So that's kind of why RNNs didn't really take off in the direction that we wanted, like billions of parameters, when it comes to training. So, what did RWKV version 0 do? Boom. We just did the dumbest, lamest thing.[00:23:13] Eugene Cheah: Sorry, this is the bottleneck for the RNN. We did the dumb thing of removing that line. And it kind of worked. It trained. It sucked, but it kind of worked. No one cared because the loss was crap, but how do we improve on that?
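As a toy contrast for the bottleneck Eugene is describing (not RWKV's actual equations): the classic RNN step has to wait on the previous hidden state through a nonlinearity, while a purely elementwise decay recurrence has a closed form that can be evaluated for every position at once, via a parallel scan on a GPU or a plain cumulative sum here.

```python
import numpy as np

def sequential_rnn(x, W_h, W_x):
    """Classic RNN: h_t = tanh(W_h h_{t-1} + W_x x_t).
    Each step waits for the previous one -- 'CPU land'."""
    h = np.zeros(W_h.shape[0])
    out = []
    for t in range(x.shape[0]):
        h = np.tanh(W_h @ h + W_x @ x[t])  # hard serial dependency on h
        out.append(h)
    return np.stack(out)

def decay_recurrence(x, decay):
    """h_t = decay * h_{t-1} + x_t, elementwise.
    Closed form h_t = sum_s decay^(t-s) x_s, so every position can be
    computed at once (cumsum here; a parallel scan on a GPU).
    The rescaling trick is numerically safe only for short toy lengths."""
    t = np.arange(x.shape[0])[:, None]
    scaled = x / (decay ** t)
    return (decay ** t) * np.cumsum(scaled, axis=0)

T, d = 8, 4
x = np.random.randn(T, d)
W_h, W_x = np.random.randn(d, d) * 0.1, np.random.randn(d, d) * 0.1
print(sequential_rnn(x, W_h, W_x).shape, decay_recurrence(x, 0.9).shape)
```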
And that's essentially how we moved forward, because if you see this kind of flow, you can actually get your GPU saturated quickly, where the compute essentially cascades.[00:23:41] Eugene Cheah: So I'm just waiting for this to loop again. Once your first layer for the first token finishes computing, you start to cascade your compute all the way until you're saying: hey, I'm using 100 percent of the GPU. So we worked on it, and we went along the principle that as long as we keep this general architecture, [00:24:00] where we can cascade and be highly efficient, nothing is sacred in our architecture.[00:24:06] Eugene Cheah: And we have done some crazy ideas. In fact, if you ask me to explain some things in the paper, officially I'll say we had this idea and we wrote it this way. The reality is someone came with code, we tested it, it worked, and then we rationalized it later. So, the general[00:24:24] RWKV Arch[00:24:24] Eugene Cheah: idea behind RWKV is that we generally have two major blocks,[00:24:30] Eugene Cheah: which we call time mix and channel mix. Time mix generally handles long-term memory states, where essentially we apply matrix multiplications and SiLU activation functions to process an input embedding into an output embedding. I'm oversimplifying, because this calculation has changed every version, and we have, like, version 7 right now. Channel mix is similar to BASED in the sense that it does shorter-term attention, where it just looks at the sister token, the token before it, because [00:25:00] there's a shift in the token shift matrix. I don't really want to go too much into the papers themselves, because we do have three papers on this: basically "RWKV: Reinventing RNNs for the Transformer Era"; "Eagle and Finch", the matrix-valued-state update covering versions 5 and 6; and GoldFinch, which is our hybrid model. We are already writing the paper for version 7, RWKV-7, named Goose; our architectures are named after birds.[00:25:30] Eugene Cheah: And I'm also going to cover QRWKV and related models, and where all of that led. Because we are all GPU poor, and to be clear, most of this research is done on only a handful of H100s, which one Google researcher told me was, like, his experiment budget as a single researcher.[00:25:48] Eugene Cheah: So our entire organization has less compute than a single researcher at Google. One of the things that we explored was: how do we convert transformer models instead? Because [00:26:00] someone already paid the millions of dollars for training, so why don't we take advantage of those weights? I believe Together AI worked on LoLCATs for the Llama side of things, and we took some ideas from there as well, and we essentially did that for RWKV.[00:26:15] QRWKV6 launch[00:26:15] Eugene Cheah: And that led to QRWKV6, which we just dropped today, a 32B Instruct preview model, where we took the Qwen 32B Instruct model, froze the feedforward layers, removed the QKV attention layers, and replaced them with RWKV linear layers.[00:26:32] Eugene Cheah: So to be clear, this means we do not have the RWKV channel mix layer, we only have the time mix layer.
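Backing up to the time mix / channel mix description for a moment, here is a rough numpy sketch of the token-shift idea: each block mixes a token's embedding with the embedding of the token right before it. The single mixing vector, squared ReLU, and sigmoid gate below are simplifications; the real RWKV formulas differ and change between versions.

```python
import numpy as np

def token_shift(x):
    """Pair each token with the one before it (zeros for the first token)."""
    shifted = np.zeros_like(x)
    shifted[1:] = x[:-1]
    return shifted

def mix_with_previous(x, mu):
    """Per-channel interpolation between the current and previous token."""
    return mu * x + (1.0 - mu) * token_shift(x)

def channel_mix_sketch(x, mu, W_k, W_v, W_r):
    """A channel-mix-shaped block: shift, project, then gate the output."""
    xk = mix_with_previous(x, mu)
    k = np.maximum(xk @ W_k, 0.0) ** 2          # squared ReLU
    r = 1.0 / (1.0 + np.exp(-(xk @ W_r)))       # sigmoid "receptance" gate
    return r * (k @ W_v)

T, d = 6, 8
x = np.random.randn(T, d)
mu = np.random.rand(d)
W_k, W_v, W_r = (np.random.randn(d, d) * 0.1 for _ in range(3))
print(channel_mix_sketch(x, mu, W_k, W_v, W_r).shape)  # (6, 8)
```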
But once we do that, we train the RWKV layers. What's important is that the feedforward layers need to be frozen, so the new attention can be learned. And then we unfreeze the feedforward layers and train all the layers together with a custom learning rate schedule, so that they can learn how to work together.[00:26:54] Eugene Cheah: The end result, surprisingly (and, to be honest, to the frustration of the RWKV [00:27:00] MoE team, which ended up releasing their model on the same day), was that with just a few hours of training on two nodes, we managed to get it to be on par, kind of, with the original Qwen 32B model. In fact, the first run completely confused us. I was telling Daniel Goldstein (Smerky), who kind of leads most of our research coordination: when you pitched me this idea, you told me at best you'd get the same level of performance.[00:27:26] Eugene Cheah: You didn't tell me the Challenge and Winogrande scores would shoot up. I don't know what's happening there. But it did. The MMLU score dropping, that was expected, because if you think about it, when we were training all the layers, we essentially Frankensteined this thing and did brain damage to the feedforward network too, with the new RWKV layers.[00:27:47] Eugene Cheah: But 76 percent, hey, somehow it's retained, and we can probably train this further. We didn't even spend more than 3 days training this, so there's a lot more that can be done, hence the preview. This brings up [00:28:00] a big question, because we are already in the process of converting the 70B. This is actually an extremely compute-efficient way to test our attention mechanic.[00:28:10] Eugene Cheah: It becomes a shortcut. We are already planning to do our version 7 and our hybrid architecture with it, because we don't need to train from scratch, and we get a really good model out of it. And the other thing that is uncomfortable to say, because we are doing the 70B right now, is that if this scales correctly to 128k context length (I'm not even talking about a million, just 128k), the majority of enterprise workload today is just on 70B models at under 32k context length.[00:28:41] Eugene Cheah: That means if this works and the benchmarks match, we can replace the vast majority of current AI workloads, unless you want super long context. And then, sorry, can someone give us more GPUs? Because we do need the VRAM for super long context, sadly. So yeah, that's what we are working on, and essentially [00:29:00] we are excited to push this further. And this conversion process, to be clear, I don't think is going to be exclusive to RWKV. It will probably work for Mamba as well, I don't see why not. And we will probably see more ideas, more experiments, more hybrids. One of the weirdest things that I want to say outright, and I confirmed this with the BlackMamba team and the Jamba team, because we did the GoldFinch hybrid model, is that none of us understands why a hard hybrid, with a state-based model, be it RWKV[00:29:28] Eugene Cheah: or state space, plus a transformer, performs better than the baseline of both. When you train one and then you replace parts of it, you expect the same results. That's our pitch, that's our claim. But somehow, when we jam both together, it outperforms both.
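Here is a PyTorch-flavored sketch of that freeze-then-unfreeze conversion recipe on a toy model. ToyBlock and ToyTimeMix are hypothetical stand-ins for a pretrained transformer block and an RWKV-style replacement, and the learning rates, step counts, and dummy loss are placeholders rather than the actual QRWKV training configuration.

```python
import torch
from torch import nn

class ToyBlock(nn.Module):
    """Stand-in for a pretrained transformer block: attention + feedforward."""
    def __init__(self, d):
        super().__init__()
        self.attn = nn.MultiheadAttention(d, num_heads=2, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))
    def forward(self, x):
        a, _ = self.attn(x, x, x)
        return x + self.ffn(x + a)

class ToyTimeMix(nn.Module):
    """Placeholder for an RWKV-style linear-attention layer."""
    def __init__(self, d):
        super().__init__()
        self.proj = nn.Linear(d, d)
    def forward(self, x, *args, **kwargs):
        return self.proj(x), None

model = nn.Sequential(*[ToyBlock(32) for _ in range(2)])

# Stage 1: swap the attention modules, freeze everything that was "pretrained",
# and train only the new layers so they learn to stand in for the old attention.
for block in model:
    block.attn = ToyTimeMix(32)
for name, p in model.named_parameters():
    p.requires_grad = "attn" in name
opt = torch.optim.AdamW([p for p in model.parameters() if p.requires_grad], lr=1e-4)

x = torch.randn(4, 16, 32)                     # (batch, tokens, dim)
loss = model(x).pow(2).mean()                  # dummy objective for the sketch
loss.backward(); opt.step(); opt.zero_grad()

# Stage 2: unfreeze all layers and train jointly at a lower learning rate so the
# previously frozen feedforward weights can adapt to the new attention.
for p in model.parameters():
    p.requires_grad = True
opt = torch.optim.AdamW(model.parameters(), lr=2e-5)
```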
And that's one area where we only have about four experiments across four teams, so a lot more needs to be done.[00:29:51] Eugene Cheah: But these are the things that excite me, essentially, because that is where we can potentially move ahead. Which brings us to what comes next.[00:30:00] What's next[00:30:00] Dan Fu: So, this part is where we'll talk a little bit about stuff that we're excited about, and maybe have some wild speculation on what's coming next.[00:30:12] Dan Fu: And of course, this is also the part that will be more open to questions. A couple of things that I'm excited about: one is continued hardware-model co-design for these models. One of the things that we've put out recently is this library called ThunderKittens. It's a CUDA library.[00:30:29] Dan Fu: One of the things that we found frustrating is that every time we built one of these new architectures, and I'm sure you had the exact same experience, we'd have to go and spend two months in CUDA land writing these new efficient kernels. And if we decided to change one thing in PyTorch, one line of PyTorch code is like a week of CUDA code, at least.[00:30:47] Dan Fu: So one of our goals with a library like ThunderKittens was to break down what the key principles are, what the key hardware capabilities are, what the key compute pieces are that you get from the hardware. So for example, on [00:31:00] H100, everything really revolves around a warpgroup matrix multiply operation.[00:31:06] Dan Fu: So you really want your operation to be able to split into relatively small matrix-matrix multiply operations, like multiplying two 64 by 64 matrices, for example. And if you know that ahead of time when you're designing your model, that probably gives you some information about how you set the state sizes and how you set the update function.[00:31:27] Dan Fu: So with ThunderKittens, we basically built a whole library around this basic idea that your basic compute primitive should not be a float, it should be a matrix, and everything should just be matrix compute. And we've been using that to try to both re-implement some existing architectures and also start to design[00:31:44] Dan Fu: some new ones that are really designed with this tensor core primitive in mind. Another thing that at least I'm excited about: over the last four or five years, we've really been looking at language models as the next thing. But if you've been paying [00:32:00] attention to Twitter, there's been a bunch of new next-generation models coming out.[00:32:04] Dan Fu: So there are video generation models that can run in real time, that are controlled by your mouse and your keyboard, and that, I'm told, if you play with them, only have a few seconds of memory. Can we take that kind of model and give it a very long context length, so that you could actually maybe generate an entire game state at a time?[00:32:25] Dan Fu: What does that look like for the model? You're certainly not going to do a giant quadratic attention computation to try to run that. Or take some of these new video generation models that came out. So Sora came out, I don't know, two days ago now.
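This is not ThunderKittens code, just a numpy sketch of the "your primitive is a matrix, not a float" framing: the innermost unit of work is a small 64-by-64 matrix multiply, which is roughly the granularity the H100's warpgroup matrix-multiply instructions want. A real kernel would keep those tiles in registers and shared memory rather than in numpy arrays.

```python
import numpy as np

TILE = 64  # think in tensor-core-sized tiles, not individual floats

def tiled_matmul(A, B, tile=TILE):
    """C = A @ B computed tile by tile; every inner op is a (tile x tile) matmul."""
    M, K = A.shape
    K2, N = B.shape
    assert K == K2 and M % tile == 0 and K % tile == 0 and N % tile == 0
    C = np.zeros((M, N), dtype=A.dtype)
    for i in range(0, M, tile):
        for j in range(0, N, tile):
            acc = np.zeros((tile, tile), dtype=A.dtype)
            for k in range(0, K, tile):
                acc += A[i:i+tile, k:k+tile] @ B[k:k+tile, j:j+tile]
            C[i:i+tile, j:j+tile] = acc
    return C

A = np.random.randn(128, 128).astype(np.float32)
B = np.random.randn(128, 128).astype(np.float32)
assert np.allclose(tiled_matmul(A, B), A @ B, atol=1e-2)
```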
But with super long queue times and super long generation times.[00:32:43] Dan Fu: So that's probably a quadratic attention operation at the bottom of it. What if we could remove that and get the same quality, but a lot faster generation time? Or take some of the demos that we saw from Paige earlier today. You know, if I have a super long conversation with my [00:33:00] Gemini bot, what if I wanted it to remember everything that it's seen in the last week?[00:33:06] Dan Fu: I mean, maybe you don't, for personal reasons, but what if I did, you know? What does that mean for the architecture? And I think that's certainly something I'm pretty excited about, and I'm sure you're excited about it too. So, I think we were supposed to have some hot takes, but I honestly don't remember what our hot takes were.[00:33:21] Hot Takes - does anyone really need long context?[00:33:21] Eugene Cheah: Yeah, including the next slide. Hot takes, yes, these are our[00:33:25] Dan Fu: hot takes.[00:33:25] Eugene Cheah: I think the big one on Twitter that we saw, that we shared, was the question: is RAG still relevant, given the future of state-based models?[00:33:38] Dan Fu: Let's see, I haven't played too much with RAG, but when I have, I'll say I found it a little bit challenging to do research on, because we had this experience over and over again where you could have an embedding model of any quality, a really, really bad embedding model or a really, really [00:34:00] good one, by any measure of good,[00:34:03] Dan Fu: and for the final RAG application it kind of didn't matter. That's what I'll say about RAG while I'm being recorded. I know it doesn't actually answer the question, but[00:34:13] Eugene Cheah: Yeah, so I think a lot of folks are extremely excited about the idea of RWKV or state space models potentially having infinite context.[00:34:21] Eugene Cheah: But I think the reality is that when we say infinite context, we just mean a different kind of infinite context, or, as was covered previously, you need to use the model differently. So, think of it more along the lines of a human. Like, I don't remember what I ate for breakfast yesterday.[00:34:37] Eugene Cheah: Yeah, that's the statement I'll make. And we humans are not quadratic transformers. If we were, if, let's say, we increased our brain size for every second we live, we would have exploded by the time we were 5 years old or something like that. And I think, fundamentally for us, regardless of whether it's RWKV, state space, xLSTM, [00:35:00] etc., our general idea is: instead of that expanding state and that increase in computational cost, what if we have a fixed state size?[00:35:08] Eugene Cheah: And information theory dictates that that fixed state size will have a limit; just how big that limit is is the question. RWKV is running at about 40 megabytes for its state; its future versions might run at 400 megabytes. That is, mathematically speaking, millions of tokens at the maximum possibility.[00:35:29] Eugene Cheah: It's just that I guess we are all still inefficient about it, so maybe we hit 100,000. And that's kind of the work we are doing, trying to push it and maximize it. And that's where the models will start differing, because they will choose to forget things and choose to remember things.
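Some back-of-the-envelope arithmetic for the contrast Eugene is drawing, with made-up but plausible transformer dimensions: a transformer's KV cache grows linearly with the tokens it has seen, while a recurrent model's state stays at a fixed size regardless of sequence length.

```python
# Rough sizing only; the layer/head/dtype numbers are illustrative assumptions,
# not any specific model's real configuration.
def kv_cache_bytes(tokens, n_layers=32, n_kv_heads=8, head_dim=128, bytes_per=2):
    # 2x for keys and values; fp16/bf16 means 2 bytes per element
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per * tokens

FIXED_STATE_MB = 40  # the ballpark RWKV state size quoted in the talk

for tokens in (8_000, 128_000, 1_000_000):
    kv_mb = kv_cache_bytes(tokens) / 1e6
    print(f"{tokens:>9,} tokens: KV cache ~{kv_mb:8.0f} MB vs fixed state {FIXED_STATE_MB} MB")
```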
And that's why I think there might still be some element of RAG, but it may not be the same RAG.[00:35:49] Eugene Cheah: It may be that the model learns things and goes: hmm, I can't remember that article, let me do a database search. Just like us humans, when we can't remember an article at the company, we do a search on Notion. [00:36:00] Dan Fu: I think something that would be really interesting: right now, one intuition about language models is that a lot of those parameters are there just to store random facts about the world.[00:36:14] Dan Fu: And this intuition comes from the observation that if you take a really small language model, it can do things like talk to you, it can learn the style of conversation, but where it will usually fall over compared to a much larger one is that it'll just be a lot less factual about the things it knows or can do.[00:36:32] Dan Fu: But that points to all those weights we're spending, all that SGD we're spending to train these models, just being used to store facts. And we have things like databases that are pretty good at storing facts. So I think one thing that would be really interesting is if we could actually have some sort of outside data store that a language model can look at, one that maybe even has some sort of gradient descent in it; that would be quite interesting.[00:36:58] Dan Fu: And then maybe you could edit it, delete [00:37:00] facts, you know, change who's president, so that it doesn't get lost.[00:37:04] Vibhu: Can we open up Q&A and hot takes to the audience? I have a hot take Q&A: do these scale? When is there a 405B state space model? RAG exists, no one does long context, who's throwing in 2 million token questions? Hot takes?[00:37:24] Dan Fu: The question of who's throwing in 2 million token prompts, I think, is a really good one. I was actually going to offer that as a hot take. I mean, my hot take was going to be that long context doesn't matter. I know I just gave a whole talk about it, but what's the point of doing research if you can't play both sides?[00:37:40] Dan Fu: I think for both of us, the reason we first got into this was just the first-principles question: there's this quadratic thing, clearly intelligence doesn't need to be quadratic, what is going on, can we understand it better? Since then it's kind of turned into a race, which has [00:38:00] been exciting to watch, like, how much context you can take in.[00:38:03] Dan Fu: But I think it's right: nobody is actually putting a two million token prompt into these models. And if they are, maybe we can go design a better model to do that particular thing. Yeah, what do you think about that? You've also been working on this. Do you think long context matters?[00:38:19] Eugene Cheah: So I'm going to burn a bit. How many of you remember the news of Google Gemini supporting 3 million context? Raise your hand.[00:38:28] Vibhu: Yeah, 2 million.[00:38:29] Eugene Cheah: Oh, it's 2 million.[00:38:31] Eugene Cheah: Yeah, how many of you actually tried that? See?[00:38:34] Vibhu: I use it a lot. You? You work for MindsTV.
I use it a lot.[00:38:41] Eugene Cheah: So, some people have used it, and I think this might be where my opinion starts to differ, because I think the big labs may have a bigger role in this. Even for RWKV, even when we train long context, the reason I say VRAM is a problem is that because we need to backprop [00:39:00] against the states, we actually need to maintain the states in between the tokens across the whole token length.[00:39:05] Eugene Cheah: That means we need to actually roll out the whole 1 million context if we are actually training at 1 million. Which is the same for transformers, actually, but it means we don't magically reduce the VRAM consumption at training time. So that is one of the VRAM bottlenecks, and I'm neither OpenAI nor Google, so donate GPUs if you have too many of them.[00:39:27] Eugene Cheah: But then, putting it into another paradigm, I think o1-style reasoning might actually be pushing that direction downwards. In my opinion, and this is my partial hot take: let's say you have a super big model, and let's say you have a 70B model that may take double the tokens but gets the same result.[00:39:51] Eugene Cheah: Strictly speaking, the 70B, and this is true for transformer or non-transformer, will take less resources than that 400B [00:40:00] model, even if it did double the amount of thinking. And if that's the case, and we are all still trying to figure this out, maybe the direction for us is really getting the sub-200B models to be as fast and efficient as possible,[00:40:11] Eugene Cheah: with a very efficient architecture that some folks happen to be working on, and just reasoning it out over larger and larger context.[00:40:20] Question: Yeah. One thing I'm super interested in is models that can watch forever. Obviously you cannot train something on infinite context length. How are y'all thinking about that, where you run on a much longer context length than is possible to train on?[00:40:38] Dan Fu: Yeah, it's a great question. I think you guys probably had tweets along these lines, too. When we first started doing these things, because these are all recurrent models, in theory you could just run them forever. And at the very least they won't error out or crash on you.[00:40:57] Dan Fu: There's another question of whether they can actually [00:41:00] use what they've seen in that infinite context. And I think one place where the research on architectures probably ran faster than the rest of the research is actually the benchmarks for long context. So you turn it on forever, you want it to do everything or watch everything: what is it that you actually want it to do? Can we actually build some benchmarks for that, then measure what's happening, and then ask the question: can the models do it? Is there something else that they need? Yeah, I think if I were to turn back the clock to 2022, that's probably one of the things I would have done differently, which would have been to actually get some long context benchmarks out at the same time as we started pushing context length on all these models.[00:41:41] Eugene Cheah: I will also say something about the use case. I think we both agree that there's no infinite memory, and the model needs to be able to learn and decide.
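A quick sanity check of Eugene's 70B-versus-400B comparison above, using the standard rough estimate of about 2 x parameters FLOPs per token for a dense decoder forward pass. The parameter counts and token budgets are just the hypothetical numbers from the discussion, and the estimate ignores attention costs, memory bandwidth, and batching.

```python
def inference_flops(params_billion, tokens):
    """Rough dense-decoder estimate: ~2 * P FLOPs per token processed."""
    return 2 * params_billion * 1e9 * tokens

small = inference_flops(70, 2 * 1000)    # 70B model "thinking" for twice as many tokens
big = inference_flops(400, 1 * 1000)     # 400B model answering directly
print(f"70B x 2k tokens : {small:.2e} FLOPs")
print(f"400B x 1k tokens: {big:.2e} FLOPs")
print(f"small / big = {small / big:.2f}")  # ~0.35, i.e. still roughly 3x cheaper
```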
I think what we have observed, and I think this also fits the state space models, is that one of the key advantages of this alternate attention mechanic, which is not based on token position, is that the model doesn't suddenly go crazy when you go past the [00:42:00] 8k training context length, or a million context length.[00:42:03] Eugene Cheah: It's actually still stable. It's still able to run, it's still able to rationalize. It just starts forgetting things. But some of those things are still there in latent memory; some of those things are still somewhat there. That's the whole point of why reading twice works, things like that. And one of the biggest pushes in this direction is that I think both state space and RWKV have separate papers by other researchers where they use these architectures for time series data,[00:42:26] Eugene Cheah: like weather modeling. So you are not asking what the weather was five days ago; you're asking what the weather will be tomorrow, based on an effectively infinite length, for as long as this Earth and the computer keep running. And they found that it is better than existing transformer architectures at modeling this weather data,[00:42:47] Eugene Cheah: controlling for the parameter size and such. I'm quite sure there are people with larger models. So there are future applications here, if your question is what's next and not what was 10 years ago.[00:42:59] Dan Fu: Thanks so [00:43:00] much for having us. Get full access to Latent Space at www.latent.space/subscribe
Welcome back to Heroes Three! Host picks continue with Matthew's choice, and he brought the Bong Joon-ho monster film from 2006, The Host! Check out some H3 art and merch! - https://www.teepublic.com/user/kf_carlito Full cast and credits at IMDB! Find us online - https://linktr.ee/Heroes3Podcast Email us! - heroes3podcast@gmail.com Making of The Host documentary Full blogpost with gifs here! Timestamps (0:00) Intro (1:35) Carlos's "The Host" history (2:58) Marty's "The Host" History (8:01) The strengths of this movie (13:05) The cast (16:29) Carlos plot recap (20:18) Movie discussion (54:30) Final Thoughts (56:06) Plugs and training for next week
Capture The Chaos - Grow Your Newborn and Family Photography Business
In this episode, Brittnie and Julie explore their personal and professional lives, sharing insights on balancing work, family, and self-care while exploring the critical role of website design and Search Engine Optimization (SEO) for small businesses. 1. Balancing Work, Family, and Personal Wellness Julie shares how she manages her pension consulting business, side hustle (PNW Designs), and family responsibilities with the support of her spouse. Brittnie discusses the challenges of running her planner business while raising a toddler. Both emphasize the importance of routines, clear expectations and self-care in maintaining balance. 2. Self-Care and Wellness Julie details her morning workout routine, walking habits, and passion for reading (73 books in 2024!). Brittnie discusses prioritizing health amidst hormone and anemia struggles, highlighting the need for optimal health solutions. 3. Website Design and SEO Basics Julie explains how header tags (H1, H2, H3) help Google understand website content and structure. They discuss the importance of targeting the right keywords to attract the ideal audience. Website platforms like WordPress, Squarespace, Wix, and Shopify are compared, and the pros and cons for small business owners are shared. 4. Content Longevity and Social Media Balance Brittnie recounts how a viral blog post drove traffic to her business for years, demonstrating the value of well-targeted, long-form content. They discuss balancing time-consuming social media tasks with more sustainable marketing strategies. 5. SEO Services and Resources Julie highlights her SEO audit and optimization services, offering small business owners actionable insights to improve their website visibility. She shares upcoming episodes of her podcast, Being Better Every Day, which features topics on SEO and Brittnie as a guest speaker. Connect with Julie Website Podcast Youtube Instagram SEO Monitoring Tools Annual Planning Workbook The Planner 2025 Planner Reveal: Watch it here! Grab The Capture The Chaos Planner. Use code CHAOS15 for 15% off your order Listen to more episodes of The Organized Creative Podcast! Listen on: Apple | Spotify | Google
In March this year, the small rocket Kairos self-destructed just five seconds after liftoff. Its second attempt is now just around the corner. We spoke at length with a science reporter who also covered the previous launch at the launch site in Kushimoto, Wakayama Prefecture, about what has happened so far, how the rocket works, the state of space development, and even tips for watching the launch. *Recorded on November 25, 2024. [Related links] The private rocket that exploded last time is set to launch on December 14, carrying satellites for a second attempt: https://www.asahi.com/articles/ASSB92CTQSB9PLBJ003M.html?iref=omny The maxim "don't change a rocket that's working": what makes developing the H3 so hard: https://www.asahi.com/articles/ASS2G4RD7S28PLBJ001.html?iref=omny Solid vs. liquid rocket fuel: what's the difference? Each has its own strengths in function and ease of handling: https://www.asahi.com/articles/ASS9L0S5BS9LPLBJ002M.html?iref=omny [Cast and staff] Tetsuya Ishikura (Science and Future desk); MC and audio editing: Tatsuya Shimoji [Asa-Poki info] Send your feedback via the message form → https://bit.ly/asapoki_otayori Program calendar → https://bit.ly/asapki_calendar Cast search tool → https://bit.ly/asapoki_cast Latest news on X (formerly Twitter) → https://bit.ly/asapoki_twitter Join the community → https://bit.ly/asapoki_community Subtitled episodes on YouTube → https://bit.ly/asapoki_youtube_ Bonus stories in the newsletter → https://bit.ly/asapoki_newsletter All episodes on the official site → https://bit.ly/asapoki_lp For companies considering advertising → http://t.asahi.com/asapokiguide Email us → podcast@asahi.com See omnystudio.com/listener for privacy information.
rWotD Episode 2770: HIST1H2AE Welcome to Random Wiki of the Day, your journey through Wikipedia’s vast and varied content, one random article at a time.The random article for Tuesday, 3 December 2024 is HIST1H2AE.Histone H2A type 1-B/E is a protein that in humans is encoded by the HIST1H2AE gene.Histones are basic nuclear proteins that are responsible for the nucleosome structure of the chromosomal fiber in eukaryotes. Nucleosomes consist of approximately 146 bp of DNA wrapped around a histone octamer composed of pairs of each of the four core histones (H2A, H2B, H3, and H4). The chromatin fiber is further compacted through the interaction of a linker histone, H1, with the DNA between the nucleosomes to form higher order chromatin structures. This gene is intronless and encodes a member of the histone H2A family. Transcripts from this gene lack polyA tails; instead, they contain a palindromic termination element. This gene is found in the large histone gene cluster on chromosome 6p22-p21.3.This recording reflects the Wikipedia text as of 00:57 UTC on Tuesday, 3 December 2024.For the full current version of the article, see HIST1H2AE on Wikipedia.This podcast uses content from Wikipedia under the Creative Commons Attribution-ShareAlike License.Visit our archives at wikioftheday.com and subscribe to stay updated on new episodes.Follow us on Mastodon at @wikioftheday@masto.ai.Also check out Curmudgeon's Corner, a current events podcast.Until next time, I'm neural Amy.
Paul McDonald, host of Men at the Movies, invites listeners to support the podcast and its various platforms on Giving Tuesday. He highlights his video projects, including a documentary series about the impact of Jesus through the ministry of Heads, Hearts, and Hands in Uganda. McDonald expresses gratitude for the relationships built through the podcast and invites listeners to partner in its future endeavors. Key points • Podcast Expansion: Website, YouTube channel, Instagram, Facebook, and TikTok accounts for additional content. • Documentary Project: Creating a short documentary series based on the author's experiences in Kampala, Uganda, focusing on the impact of H3's work. • Podcast Focus: Exploring stories of transformation and revelation, similar to those found in beloved films, through the Men at the Movies podcast. • Call to Action: Seeking prayer, financial support, and partnerships to continue the podcast and documentary project. Links: - Patreon: patreon.com/menatthemovies - Cameo giving: menatthemovies.com/investors - Men at the Movies website: https://www.menatthemovies.com/ - Men at the Movies YouTube channel: https://www.youtube.com/@UC2xo9bvDbN4Z3BEx37AlRqw - Socials: Instagram: https://www.instagram.com/menatthemovies/ Facebook: https://www.facebook.com/menatthemovies TikTok: https://www.tiktok.com/@UC2xo9bvDbN4Z3BEx37AlRqw X (formerly Twitter): https://x.com/_menatthemovies C.S. Lewis Institute-Charlotte: https://www.cslewisinstitute.org/charlotte/ Carolina Outpost: https://carolinaoutpost.com/ Heads, Hearts, and Hands (H3): https://www.headsheartsandhands.org/ Subscribe to our YouTube channel (https://www.youtube.com/channel/UC2xo9bvDbN4Z3BEx37AlRqw?sub_confirmation=1) for bonus content. To dive into this content even more, visit our website: www.menatthemovies.com/podcast. You will find resources mentioned on the podcast, plus quotes and themes discussed. Find us on the socials: YouTube: www.youtube.com/@menatthemovies Facebook: www.facebook.com/menatthemovies Instagram: www.instagram.com/menatthemovies/ TikTok: www.tiktok.com/@menatthemovies Twitter: twitter.com/_menatthemovies If you would like to support our work (and get some behind-the-scenes perks), visit our Patreon page (www.patreon.com/menatthemovies). Get invites to livestreams, bonus episodes, even free merch. If you'd like to do a one-time contribution (a cameo appearance), visit www.menatthemovies.com/investors. Edited and mixed by Grayson Foster (graysonfoster.com) Logo and episode templates by Ian Johnston (ianhjohnston.com) Audio quotes performed by Britt Mooney, Paul McDonald, and Tim Willard, taken from Epic (written by John Eldredge) and Song of Albion (written by Stephen Lawhead). Southerly Change performed by Zane Dickinson, used under license from Shutterstock Links: MATM website: www.menatthemovies.com/podcast YouTube: www.youtube.com/@menatthemovies Spotify: open.spotify.com/show/50DiGvjrHatOFUfHc0H2wQ Apple pods: podcasts.apple.com/us/podcast/men-at-the-movies-podcast/id1543799477 Google pods: podcasts.google.com/feed/aHR0cHM6Ly9hbmNob3IuZm0vcy80ODMwNThjL3BvZGNhc3QvcnNz
This week on the pod, Anthony is joined by Nour and Karine to discuss the recent ceasefire. They're also reacting to this week's H3 podcast where Ethan Klein invited known zionist LonerBox to discuss politics, but crew members AB, Lena, and Olivia didn't let them get away with their usual propaganda. We're also updating our Lebanese celebrity tier list!
Soy Solo - Honest leadership stories about being happy in the 21st century and beyond
In this episode of CEO en Camiseta Diario, Leo Piccioli invites you to explore the three horizons model: H1, H2, and H3. Learn to look beyond the day-to-day urgencies and design your future with strategic vision. Discover how small adjustments, big leaps, and disruptive transformations can change your career, your business, and your life. In a world that seems to keep accelerating, thinking long term is the advantage few dare to take. Did you enjoy it? Share it with someone who needs it, and subscribe at ceoencamiseta.com/subscribe for more content to help you grow!
Welcome back to Heroes Three! Some host picks to close out the year and Carlos brings one of his all time favorites, Katsuhiro Otomo's Memories from 1995! Check out some H3 art and merch! - https://www.teepublic.com/user/kf_carlito Full cast and credits at Anime News Network! Find us online - https://linktr.ee/Heroes3Podcast Email us! - heroes3podcast@gmail.com Full blog post with gifs here! Making Memories Documentary Read the Memories short by Otomo here! Animation Obsessive's great writing on Magnetic Rose here! Weird cel wobble from Fist of the North Star Timestamps (0:00) Intro (0:52) Why Memories (2:49) Carlos Memories memories (7:54) Marty's surprising Memories memories (11:27) Matthew's Memories first time (16:10) Production History (19:26) Based on a true story?? (20:53) Wrong info online (23:08) Magnetic Rose (37:11) Stink Bomb (52:54) Cannon Fodder (1:04:56) Final thoughts (1:09:51) Outro and training for next week
This week on the pod, Anthony, Nour, and Nadim discuss the election results, madness in Iran, and lots of crazy H3 vs Hasan updates!
This week, we take a detour to hear about Carlos's super fun trip to Japan! Ninja museum, boar encounters, and super grannies await! Check out some H3 art and merch! - https://www.teepublic.com/user/kf_carlito Find us online - https://linktr.ee/Heroes3Podcast Email us! - heroes3podcast@gmail.com Full blogpost with pictures here!
Apple is due to spend about $1.5 billion to expand its iPhone direct to device services with Globalstar. 3D rocket company Relativity Space is struggling to raise much needed capital. China's Shenzhou-18 crew have returned to Earth after a six-month stay on the Tiangong space station, and more. Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign-up for our weekly intelligence roundup, Signals and Space, and you'll never miss a beat. And be sure to follow T-Minus on LinkedIn and Instagram. T-Minus Guest Our guest today is Herb Baker, Author of “From Apollo to Artemis: Stories from my 50 years with NASA”. You can connect with Herb on LinkedIn, and learn more about his book at herbbaker.space Selected Reading Apple commits $1.5 billion to Globalstar for expanded iPhone satellite services Relativity Space Is Said to Face Cash Drain, Exploring Options - Bloomberg China space station crew returns to Earth after 6 months in space- AP News Japan launches military communications satellite on 4th flight of H3 rocket- Space Telesat Signs US$39 Million Contract with SatixFy to Develop and Deliver Landing Station Baseband Units for Telesat Lightspeed Network First Flight Of Mira II $7 billion project to create Australian military satellites axed amid defence spending review - ABC News ‘We are go for launch' approval granted for the world's most versatile rocket launch site France and Morocco Partner to Develop Pan-African Satellite Communication System - Space in Africa Pinnacle Private Ventures Leads $2.25M Seed Round for LINGO, Advancing STEM Workforce Development Teledyne to Acquire Micropac- Business Wire MDA Space Announces Appointment Of Guillaume Lavoie As Chief Financial Officer NASA's New Edition of Graphic Novel Features Europa Clipper T-Minus Crew Survey We want to hear from you! Please complete our 4 question survey. It'll help us get better and deliver you the most mission-critical space intel every day. Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here's our media kit. Contact us at space@n2k.com to request more info. Want to join us for an interview? Please send your pitch to space-editor@n2k.com and include your name, affiliation, and topic proposal. T-Minus is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices
Isaac Brodsky discusses the integration of H3, an open-source hierarchical hexagonal grid system, with DuckDB, an analytical SQL database, to enhance geospatial data analysis. This combination enables efficient querying and manipulation of diverse datasets in real-time. Highlights
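For a flavor of what that combination looks like in practice, here is a small sketch using DuckDB's Python API and the community h3 extension. The function name h3_latlng_to_cell follows the H3 v4-style naming the extension uses, but treat the exact function names and the INSTALL syntax as assumptions to check against the extension's docs for your DuckDB version.

```python
import duckdb

con = duckdb.connect()
con.sql("INSTALL h3 FROM community;")  # needs network access the first time
con.sql("LOAD h3;")

# Make some fake GPS points, then bucket them into H3 cells at resolution 8.
con.sql("""
    CREATE TABLE pings AS
    SELECT 37.7 + random() * 0.1 AS lat, -122.5 + random() * 0.1 AS lng
    FROM range(100000);
""")
top_cells = con.sql("""
    SELECT h3_latlng_to_cell(lat, lng, 8) AS cell, count(*) AS n
    FROM pings
    GROUP BY cell
    ORDER BY n DESC
    LIMIT 5;
""")
print(top_cells)
```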
Tyler Gillum, Head Coach of the team Savannah Bananas, shares stories that shaped his character, emphasizing the profound impact individuals can have on one another.Join us on this exciting double-header episode of the Dugout CEO!TOPICS FROM THIS EPISODE:Leadership and Vision at the Savannah Bananas: Discussion about the importance of leadership, particularly Jesse Cole's disciplined approach and commitment to the team's vision. Emphasis on leading by example and fostering a culture of creativity and uniqueness.Connecting with Fans and Breaking Down Barriers: Highlighting the team's focus on creating a personal connection with fans, including the H3 (high five, hug, handshake) approach. Breaking down the traditional barriers between players and fans to create a more engaging and inclusive atmosphere.Banana Land Experience and Entertainment: Describing the unique fan experience at Banana Land games, from the banana orientation to post-game celebrations. Emphasis on creating never-forget moments, connecting with fans during the game, and the team's dedication to entertainment.Real-life Moment with Bill Spaceman Lee: Discussing a challenging and unscripted moment during a game where Bill Spaceman Lee experienced a health scare. The organization's genuine concern, unity, and family culture were highlighted.Bet On Yourself and Positive Impact Goal: Tyler's personal goal is to positively impact 1 million people through baseball, education, and exercise. The "Bet On Yourself" initiative to raise money for baseball gloves for kids on tour emphasizes the importance of self-belief and pursuing one's passion.Connect with Tyler Gillum: https://www.linkedin.com/in/tyler-gillum-bb8317b7/https://thesavannahbananas.com/Join our Free Newsletter:Friday Focus Newsletter: Receive actional tips to remove yourself from the daily grind of your business.www.CaseyCavell.comFollow Casey on LinkedIn: https://www.linkedin.com/in/caseycavell/
Sina Kashuk is the co-founder & CEO of Fused, who wants to make iterating & deploying in Python faster with serverless computing. We break down what that actually means, why it matters and what data science workflows could look like over the next few years.This also isn't Sina's first company, a few years ago he started Unfolded.ai, focused on making visualisations for data scientists faster. The company was acquired by Foursquare in 2021.Sponsor: Beemaps by HivemapperGet access to high quality, fresh map data at https://beemaps.com/mindsUse promo code MINDS to get 50% off your API credits through Dec. 31 2024About SinaTwitterLinkedInShownotesNote: Links to books are Amazon Affiliate links. I earn a small commission if you buy any of these books.My blogpost joining the teamUber's H3 tiling gridFoursquare acquires UnfoldedAWS LambdaMy conversation with Ib GreenFused.ioBook & Podcast recommendationAwaken the Giant Within by Tony Robbins Affiliate LinkI'm pretty sure you can find Minds Behind Maps by yourself if you're hereTimestamps(00:00) - Intro (02:38) - Sponsor: Beemaps(03:55) - Hacking (06:07) - Fused.io(07:23) - Why run your algorithm in the cloud? (10:06) - Serverless computing (12:40) - Optimizing for iteration speed (18:52) - Breaking Fused into smaller parts (23:27) - "User Defined Functions: UDF" (31:08) - How do you make money? (31:56) - Why start companies? (42:41) - Convincing people to use your tools (49:44) - Speed isn't all: Train / Plane analogy (54:36) - Going beyond geospatial (57:33) - Building a team (59:54) - Podcast/book recommendation (01:01:11) - Building a Long Term Vision (01:06:59) - Support the podcast on PatreonSupport the podcast on PatreonMy TwitterPodcast TwitterRead Previous Issues of the NewsletterEdited by Peter XiongFind more of his work
Welcome back to Heroes Three! This week Summer of Ninja ENDS as we are joined by Leon from Hong Kong and Beyond to discuss the apex of 80s ninja cinema, Revenge of the Ninja from Sam Firstenberg and Cannon Films starring Sho Kosugi! Check out some H3 art and merch! - https://www.teepublic.com/user/kf_carlito Check out Leon and Shaz at Hong Kong & Beyond! YouTube & Podcast Full cast and credits at IMDB! Find us online - https://linktr.ee/Heroes3Podcast Email us! - heroes3podcast@gmail.com Full blogpost with gifs here! Timestamps (0:00) Intro (0:55) Our history with Revenge of the Ninja (9:26) Production Info (18:42) Revenge of the Ninja Trailer (20:14) Movie Recap (1:00:13) An aside on nunchucks (1:08:52) Back to the Movie (1:25:04) Leon plugs (1:26:36) Our plugs
Welcome back to Heroes Three podcast! The Summer of Ninja continues as we take on another double feature, Chang Cheh's Five Element Ninjas from 1982 and The Super Ninja from 1984-ish. Check out some H3 art and merch! - https://www.teepublic.com/user/kf_carlito Full cast and credits at HKMDB & HKMDB! Find us online - https://linktr.ee/Heroes3Podcast Email us! - heroes3podcast@gmail.com Full blogpost with gifs here! Timestamps (0:00) Intro (1:12) Why these two (4:56) 5 Element Ninjas Reputation (7:07) Chang Cheh and Shaw Brothers (11:07) Chu Ko Choreo (13:32) 5 Element Ninjas Discussion (35:51) Break Time (38:27) The Super Ninja (57:19) Plugs and Training for Next Week
The Summer of Ninja doesn't stop here at Heroes Three podcast. This episode we are very excited to be joined by The Fanatical Dragon to discuss the directorial debut of the legendary Corey Yuen, Ninja in the Dragon's Den from 1982 starring Conan Lee and Hiroyuki Sanada! Check out The Fanatical Dragon on Youtube! The Fanatical Dragon Patreon Check out some H3 art and merch! - https://www.teepublic.com/user/kf_carlito Full cast and credits at HKMDB! Find us online - https://linktr.ee/Heroes3Podcast Email us! - heroes3podcast@gmail.com Some great additional finds for the film at Vintage Ninja! Japanese Program Stills Part 1 and Part 2! Ninja Vinyl with a wonderful rip of Legend of a Ninja and it's B-side, Silver Moon! Full blog post with gifs here! Timestamps (0:00) Intro (1:07) Seasonal films, Conan Lee, and Ng See Yuen (11:16) Releases of the film (16:12) Spanish dubs and new dub crew (21:28) BotVHS (22:04) Movie Recap (1:37:54) Fanatical Dragon Plugs (1:40:16) Johnny saying really nice stuff about us (1:41:43) Our plugs
Get up to 60% off at https://www.babbel.com/SOP Keemstar joins the podcast to discuss ImAllexx, his beef with H3, and more.
On this episode of The H3 we stumble upon a WILD little true crime scandal involving a TikTok content house, two influencers, one cat, and a whole lot of accusations, intrigue, and drama! There are twists and turns, and the participants themselves wind up jumping in on the convo as well! Learn more about your ad choices. Visit megaphone.fm/adchoices
On this episode of The H3 Show: the Try Guys' try war with H3, and Ethan retaliates against their vicious bullying. Also we check in on our boy Rudy and his coffee company, discuss Fidias' shocking victory in the EU parliament, and a whole lot more! Learn more about your ad choices. Visit megaphone.fm/adchoices
Fan favorite Oliver Tree returns to get a haircut with Logan, binge-eat 12 donuts & fried chicken, address SLIME lawsuit, referee Logan’s gender reveal party, dating Billie Eilish & Jojo Siwa, riding H3’s d**k, hit song ‘Miss You’ getting ripped off, breaking up Bobby Lee & Khalyla, becoming morbidly obese, Logan & Mike WALK OFF & more… SUBSCRIBE TO THE PODCAST ► https://www.youtube.com/impaulsive Use code LOGAN10 for 10% off tickets on SeatGeek https://seatgeek.onelink.me/RrnK/LOGAN10 *Up to $25 off Watch Previous (Patrick Mahomes & iShowSpeed On Winning 3 Super Bowls, Travis Kelce Relationship, Meeting Messi) ► https://www.youtube.com/watch?v=iN-Zp3ruiEQ&t=682s ADD US ON: INSTAGRAM: https://www.instagram.com/impaulsiveshow/ Timestamps: 0:00 Welcome Oliver Tree!
Today, we have the first in-studio appearance from one of our contestants in Jeff's Bach3lor! We show a video of their romantic date on a sunset swan boat ride, and then everyone debriefs. Ethan also discusses the ongoing situation with the H3 snark page, he debriefs the Fresh & Fit debate, and much more. Learn more about your ad choices. Visit megaphone.fm/adchoices
Today we have some spicy new developments in the David Dobrik vs Jeff Wittek situation (David is being sued by State Farm). The crew also pitch their own designs for an H3 uniform, give an update on the C-Man situation, dig into a TikTok drama involving two women, a boyfriend, a sandwich, and a stolen hoodie (mystery of the decade), and much more! Learn more about your ad choices. Visit megaphone.fm/adchoices