SUMMARY: Matt gets bumped into while standing in line for ice cream, Paul deals with dog vomit without flipping out, and Jacob knows a guy who'll shoulder-check you inside a casino. Also, Matt and fellow magicians perform at a local charity event. Plus, live Scoop Mail tackles horse beans, cow eyes, stripping copper, and a Jock vs. Nerd Obelisk.
Order of Divine Service, p.7 The Augustana Service Book and Hymnal Hymn “The Head that Once was Crowned with Thorns” TLH 219 Readings: Isaiah 57:15, Acts 1:1-11, St. Mark 16:14-20 Hymn of the Day: “Dear Christians, One and All Rejoice” (The Augustana Service Book and Hymnal #35, LW 353, TLH 387) Sermon Offertory: "Create in Me…" p.18 General Prayer……… p.19-20 Hymn: “Draw Us To You” LW 153, TLH 215 Exhortation p.21 Communion Service, p.144 (Lutheran Worship) Communion Hymns: “A Hymn of Glory Let Us Sing” LW 149, TLH 212 “O Christ, Our Hope” LW 151 --Michael D. Henson, Pastor of Trinity Lutheran Church (Herrin, IL). Service Bulletin: Ascension-Cover-5-29-2025-Online.pdf https://vimeo.com/1088420036?share=copy Picture: Ottheinrich Bible 1430 (II:59) Christ's Ascension Mark 16:15-20
Order of Matins, p. 208 Lutheran Worship Hymn “Our Father, Thou in Heaven Above” (The Augustana Service Book and Hymnal #34, LW 431, TLH 458) Office Hymn “Rise! To Arms! With Prayer Employ You” LW 303, TLH 444 Psalmody: Psalm 67, 104, 47 Readings: James 5:16-20, St. Luke 11:5-13 St. John 17:1-11 Sermon --Michael D. Henson, Pastor of Trinity Lutheran Church (Herrin, IL). Service Bulletin: Rogation-Days-Cover-5-26-28-2025-Online.pdf https://vimeo.com/1088059289?share=copy
Order of Matins, p. 208 Lutheran Worship Hymn “Our Father, Thou in Heaven Above” (The Augustana Service Book and Hymnal #34, LW 431, TLH 458) Office Hymn “Prayer is the Soul's Sincere Desire” TLH 454 Psalmody: Psalm 67, 104, 47 Readings: James 5:16-20, St. Luke 11:5-13 1 Peter 4:1-17 Sermon --Michael D. Henson, Pastor of Trinity Lutheran Church (Herrin, IL). Service Bulletin: Rogation-Days-Cover-5-26-28-2025-Online.pdf https://vimeo.com/1087535602?share=copy
Rogate Divine Service, May 25, 2025 at 10:15 AM Link to Live Stream Order of Divine Service, p.7 The Augustana Service Book and Hymnal Hymn “Prayer Is the Soul's Sincere Desire” TLH #454 Readings: Jeremiah 29:11-14, James 1:22-27, St. John 16:23-33 Hymn of the Day: “Our Father, Thou in Heaven Above” (The Augustana Service Book and Hymnal #34, LW 431, TLH 458) Sermon Offertory: "Create in Me…" p.18 General Prayer……… p.19-20 Hymn: “O Living Bread from Heaven” LW 244, TLH 316 Exhortation p.21 Communion Service, p.144 (Lutheran Worship) Communion Hymns: “To God the Holy Spirit Let Us Pray” LW 155 “May We Your Precepts, Lord, Fulfil” LW 389 “O God, My Faithful God” LW 371, TLH 395 “Christians, While on Earth Abiding” LW 434 Closing Hymn: “We Give You But Your Own” LW 405 --Michael D. Henson, Pastor of Trinity Lutheran Church (Herrin, IL). Service Bulletin: Rogate-Cover-5-25-2025-Online.pdf https://vimeo.com/1085470315?share=copy Picture: Ottheinrich Bible 1430 (IV:22) Lazarus Raised John 11:1-44
We're back with a review! UFC 315 is in the books, and what a main event. Belal Muhammad took on Jack Della Maddalena. No one gets to call Belal Muhammad boring anymore, but! We do have a new champ at WW, and that means that after UFC 315 there's an opening for Islam Makhachev to take on JDM for the WW title. Topuria vs. Oliveira next at LW? Like, subscribe, and drop those 5 stars on Spotify! For all your Dutch MMA content, check out Buiten de Kooi! Follow the socials: YT: https://www.youtube.com/@buitendekooi IG: www.instagram.com/buitendekooi TikTok: https://www.tiktok.com/@UCTLy5wrugZswxF0_AU0o6sQ
Liquid Weekly Podcast: Shopify Developers Talking Shopify Development
In this episode of the Liquid Weekly Podcast, hosts Karl Meisterheim and Taylor Page discuss a variety of personal and professional topics, including rebranding efforts for Liquid Weekly and the importance of community engagement in their work. Eventually they get around to various aspects of app development, including building a resource hub, the importance of internationalization, and the role of AI in enhancing productivity.

Episode Highlights
* Taylor recently underwent eye surgery, improving his vision significantly.
* Shopify development tools are evolving based on user feedback.
* Monetization strategies for tools can enhance community engagement.
* The hosts discuss the importance of work-life balance during summer.
* Rebranding efforts for Liquid Weekly are underway to improve its image.
* Internationalization is essential for reaching a global audience.
* AI can significantly improve productivity in app development.
* Client work is evolving towards subscription models for flexibility.
* Understanding app development challenges is crucial for success.
* Liquid and Shopify updates are vital for developers to stay informed.
* Shopify's revenue share changes impact app developers significantly.

Timestamps
* 08:04 Vision Transformation: The ICL Surgery
* 13:59 Shopify Insights: Building Tools for Developers
* 20:15 Community and Collaboration: The Shopify Developer Alliance
* 24:22 Balancing Work and Family Life
* 38:53 Rebranding and Community Engagement
* 43:04 Developing a Public App
* 47:05 The Gap in App Development Resources
* 50:35 Evolving Freelance Work and Client Relationships
* 53:22 The Impact of AI on Productivity
* 57:34 Navigating AI Limitations in Development
* 59:50 Shopify Developments and Liquid Hackathon
* 01:05:08 Shopify App Developer Revenue Changes
* 01:08:31 Updates on Shopify's POS and WebPixels API
* 01:13:02 Picks of the Week and Closing Thoughts
* 01:19:48 LW podcast intro video.mp4

Dev Changelog
* Update to Shopify's app developer revenue share - https://shopify.dev/changelog/update-to-shopifys-app-developer-revenue-share
* [action required] POS UI Extensions - Cart API: Customer fields removed from subscribable hook - https://shopify.dev/changelog/pos-ui-extensions-cart-api-customer-fields-removed-from-subscribable-hook
* [action required] Web Pixels API: event.data.checkout.subtotalPrice.amount value change on the new /thank-you page and checkout events - https://shopify.dev/changelog/web-pixels-api-eventdatacheckoutsubtotalpriceamount-value-change-on-the-new-thank-you-page-and-checkout-events
* Refund to Store Credit - https://shopify.dev/changelog/refund-to-store-credit
* More automated checks for app review pre-submission page - https://shopify.dev/changelog/more-automated-checks-for-app-review-pre-submission-page
* [action required] Payment apps can no longer be embedded in the Shopify admin - https://shopify.dev/changelog/payment-apps-can-no-longer-be-embedded-in-the-shopify-admin
Order of Divine Service, p.7 The Augustana Service Book and Hymnal Hymn: “The Strife Is O'er, the Battle Done” LW 143, TLH 210 Readings: Ezekiel 34:11-16, 1 Peter 2:21-25, St. John 10:11-16 Hymn of the Day: “The Lord's My Shepherd, I'll Not Want” (The Augustana Service Book and Hymnal #31, LW 416, TLH 436) Sermon Offertory: "Create in Me…" p.18 General Prayer……… p.19-20 Hymn: “Let All Mortal Flesh Keep Silence” LW 241 Exhortation p.21 Communion Service, p.144 (Lutheran Worship) Communion Hymns: “I Am Jesus' Little Lamb” LW 517, “The King of Love My Shepherd Is” LW 412, TLH 431 “Do Not Despair, O Little Flock” LW 300 Closing Hymn “Guide Me Ever, Great Redeemer” LW 220 --Michael D. Henson, Pastor of Trinity Lutheran Church (Herrin, IL). Service Bulletin: Misericordias-Domini-Cover-5-4-2025-Online.pdf https://vimeo.com/1079137917?share=copy Picture: The Good Shepherd. Sculpture in marble. Rome. Catacombs of Domitilla, ca. 300-350 AD.
Order of Divine Service, p.7 The Augustana Service Book and Hymnal Hymn “As Surely as I Love, God Said” LW 235 Readings: Job 19:25-27, 1 John 5:4-10, St. John 20:19-31 Hymn of the Day: “Ye Sons and Daughters of the King” (The Augustana Service Book and Hymnal #30, LW 130, TLH 208) Sermon Offertory: "Create in Me…" p.18 General Prayer……… p.19-20 Hymn: “Draw Near and Take the Body of the Lord” LW #240, TLH 307 Exhortation p.21 Communion Service, p.144 (Lutheran Worship) Communion Hymns: “Christ the Lord Is Risen Today; Alleluia” LW 137, TLH 193 “Triumphant from the Grave” LW 144 “He's Risen, He's Risen” LW 138 Closing Hymn “Jesus Shall Reign” LW 312 --Michael D. Henson, Pastor of Trinity Lutheran Church (Herrin, IL). Service Bulletin: Quasimodo-Geniti-Cover-4-27-2025-Online.pdf Picture: Ottheinrich Bible 1430 (IV:48) Jesus Appears after the Resurrection in John 20:19-31
SUMMARY: Paul organizes an 'Improv for Podcasting' workshop in Pennsylvania. Matt enjoys the thrill of victory at a soccer game and the agony of the feces while buying a cake. Jacob gets chased down The Strip and whipped, then helps a neighbor with a car battery. Also, Scoop Mail and a Scoopardy. Go to poduty.com for info on "Improvisation for Podcasting with Paul Mattingly," coming May 24 at Poduty Live's Podcast Theater at Harrisons on Corbet in Tarentum, Pa.
Order of Divine Service I, p. 136 Lutheran Worship Hymn “Jesus Christ Is Risen, Today” LW 127, TLH 199 Readings: Isaiah 52:13-15, 1 Corinthians 5:6-8, St. Mark 16:1-8 Hymn of the Day: “Christ Jesus Lay in Death's Strong Bands” (The Augustana Service Book and Hymnal #29, LW 123, TLH 195) Sermon Offertory: "Create in Me…" p.18 Easter Prayer Hymn: “At the Lamb's High Feast We Sing” LW #126 Exhortation p.21 Communion Service, p.144 (Lutheran Worship) Communion Hymns: “Awake, My Heart, with Gladness” LW 128, TLH 192 “Lo, Judah's Lion Wins the Strife” LW 146, TLH 211 “The Day of Resurrection” LW 133, TLH 205 Closing Hymn “Christ the Lord Is Risen Today; Alleluia” LW 137, TLH 193 --Michael D. Henson, Pastor of Trinity Lutheran Church (Herrin, IL). Service Bulletin: Resurrection-of-Our-Lord-Cover-4-20-2025-Online.pdf https://vimeo.com/1077066336?share=copy Picture: Ottheinrich Bible 1430 (II:58) The Women Come to the Tomb in Mark 16:1-8
Order of Matins, p.208 Lutheran Worship Office Hymn “Like the Golden Sun Ascending” TLH 207 Psalmody: Psalm 92, 1, 2, 3, 99 Readings: 1 Corinthians 15:1-25, St. John 20:1-18 Sermon After Benedicamus, Paschal Blessing, LW p.244-249 --Michael D. Henson, Pastor of Trinity Lutheran Church (Herrin, IL). Service Bulletin: Easter-Dawn-Cover-4-20-2025-Full-Page.pdf https://vimeo.com/1077013847?share=copy Picture: Ottheinrich Bible 1430 (IV:46) Peter and John Come to the Tomb in John 20:3-9
Order of Divine Service, p.7 The Augustana Service Book and Hymnal Hymn of the Day: “O Sacred Head, Now Wounded” (The Augustana Service Book and Hymnal #27, LW 113, TLH 172) Bidding Prayer, p.276 Readings: Isaiah 50:6-9, Isaiah 52:13-53:12, Hosea 6:1-6, 2 Corinthians 5:14-21, St. John 18:1-19:42 Hymn “A Lamb Alone Bears Willingly” LW 111, TLH 142 Reproaches Hymn “Lamb of God, Pure and Sinless” Stanza 1 & 2 of LW 208, TLH 146, ASBH #25 Hymn “Sing, My Tongue” LW 117 Sermon Communion Hymns Hymn “Upon the Cross Extended” LW #121 Hymn “O Dearest Jesus, What Law Have You Broken” LW 119, TLH 143 --Michael D. Henson, Pastor of Trinity Lutheran Church (Herrin, IL). Service Bulletin: Good-Friday-Cover-4-18-2025-Online.pdf https://vimeo.com/1076801967?share=copy Picture: Ottheinrich Bible 1430 (II:57) The Crucifixion in Mark 15:21-41
SUMMARY: We talk with Max Lardent and Breon Jenay about the Fallout Fringe Festival coming to Las Vegas in June. We learn about the festival's origin, the rubric of accepting performers, doing Shakespeare with sock puppets, plus the disturbing genesis of a talking tree. Also, Scoop Mail and a Scoopardy.
Order of Confessional Service The Augustana Service Book and Hymnal (ASBH) Invocation, Versicles, p.227 Psalm 51 (insert) Exhortation p.228-229 Confession/Absolution p.230 Readings: Exodus 12:1-14, 1 Corinthians 11:23-32, St. John 13:1-15 Hymn of the Day: “The Death of Jesus Christ, Our Lord” (The Augustana Service Book and Hymnal #26, LW 107, TLH 163) Sermon Offertory: "Create in Me…" p.18 General Prayer……… p.19-20 Hymn: “An Awe-full Mystery Is Here” TLH 304 Exhortation p.21 Communion Service, p.144 (Lutheran Worship) Hymn “Oh, How Great Is Your Compassion” LW 364 Hymn “God Moves in a Mysterious Way” LW 426 Stripping of the Altar: Psalm 22 --Michael D. Henson, Pastor of Trinity Lutheran Church (Herrin, IL). Service Bulletin: Maundy-Thursday-Cover-4-17-2025-Online.pdf https://vimeo.com/1076532922?share=copy Picture: Ottheinrich Bible 1430 (IV:28) Jesus Washes the Disciples' Feet in John 13:1-17
Sermon 4 13 25 11 LW by St Paul's Fayetteville
Order of Divine Service, p.7 The Augustana Service Book and Hymnal Palm Sunday Procession (Matthew 21:1-9) Hymn “All Glory, Laud, and Honor” LW 102, TLH 160 Readings: Zechariah 9:9-10, Philippians 2:5-11, St. Matthew 26:1-27:66 Hymn of the Day: “Lamb of God, Pure and Holy” (The Augustana Service Book and Hymnal #25, LW 208, TLH 146) Sermon Offertory: "Create in Me…" p.18 General Prayer……… p.19-20 Hymn: “Invited, Lord, by Boundless Grace” TLH 308 Exhortation p.21 Communion Service, p.144 (Lutheran Worship) Communion Hymns: “Ride On, Ride On in Majesty” LW 105, TLH 162 “Hail, O Once Rejected Jesus” LW 284 “The Royal Banners Forward Go” LW 103, TLH 168 --Michael D. Henson, Pastor of Trinity Lutheran Church (Herrin, IL). Service Bulletin: Palmarum-Cover-4-13-2025-Online.pdf https://vimeo.com/1072978664?share=copy Picture: Ottheinrich Bible 1430 (III:61b) Jesus Before Pilate in Luke 23:1-25
Prosper Trading Academy's Mike Shorr turns to three trades he thinks are off investors' radars but worthwhile in a tariff-induced environment. He talks about Lamb Weston's (LW) tariff-resistance play and how investors can "bottom pick" Ambarella (AMBA) and Skechers (SKX). Rick Ducat turns to the technical trends he sees in each company's chart.======== Schwab Network ========Empowering every investor and trader, every market day.Subscribe to the Market Minute newsletter - https://schwabnetwork.com/subscribeDownload the iOS app - https://apps.apple.com/us/app/schwab-network/id1460719185Download the Amazon Fire Tv App - https://www.amazon.com/TD-Ameritrade-Network/dp/B07KRD76C7Watch on Sling - https://watch.sling.com/1/asset/191928615bd8d47686f94682aefaa007/watchWatch on Vizio - https://www.vizio.com/en/watchfreeplus-exploreWatch on DistroTV - https://www.distro.tv/live/schwab-network/Follow us on X – https://twitter.com/schwabnetworkFollow us on Facebook – https://www.facebook.com/schwabnetworkFollow us on LinkedIn - https://www.linkedin.com/company/schwab-network/About Schwab Network - https://schwabnetwork.com/about
Today's daf is sponsored by Abby Sosland in loving memory of Rabbi Henry Sosland. "He taught us that daily learning could be the ultimate source of comfort and sipuk nefesh." Today’s daf is sponsored by the Hadran Women of Long Island in honor of our friend and co-learner, Bracha Rutner, whose completion of Masechet Sanhedrin marks her siyum on all of Shas. "You dedicated the last seven and one-half years to this monumental achievement, and we are so proud that you are one of our group, and that we are able to share in your simcha! לכי מחיל אל חיל." Korach's wife convinced him to rebel against Moshe, despite Korach initially arguing against her persuasion. What were her specific complaints against Moshe and Aharon? Based on inferences from Bamidbar 16:14 and Tehillim 106:16, Rabbi Yochanan explains that they accused Moshe of engaging in relations with their wives. Moshe approached Datan and Aviram, seeking reconciliation. From this action, Reish Lakish teaches that one should actively work to resolve disputes. Different verses are brought to prove that anyone who challenges their teacher is considered as challenging God directly. There is a debate regarding Korach's fate: Was he swallowed by the earth or burned with the others who offered incense? This remains unresolved due to different interpretations of the verses. However, the Torah clearly states that Korach's sons survived. Regarding the generation that wandered in the desert, sages debate whether they will have a share in the World-to-Come. Various verses are cited to support both positions. Similarly, the fate of the ten tribes is disputed. Will they eventually return to the land or were they permanently exiled? This discussion centers on different interpretations of Devarim 29:27. Scholars also debate whether these tribes will receive a portion in the World-to-Come, with various verses brought as evidence. In both these controversies, Rabbi Akiva takes the stricter position that they will neither return nor have a share in the World-to-Come. Rabba bar bar Hanna quotes Rabbi Yochanan questioning Rabbi Akiva's stance, noting that Rabbi Akiva typically adopts more lenient positions. What is the source for Rabbi Akiva's general tendency toward leniency? From what point in development can one merit entry to the World-to-Come: from conception, birth, the ability to speak, or the ability to say "amen"?
Order of Divine Service, p.7 The Augustana Service Book and Hymnal Hymn "To Thy Temple I Repair" LW 207 TLH 2 Readings: Genesis 12:1-3, Hebrews 9:11-15, St. John 8:46-59 Hymn of the Day: “Lord Jesus Christ, True Man and God” (The Augustana Service Book and Hymnal #24) Sermon Offertory: "Create in Me…" p.18 General Prayer……… p.19-20 Hymn: “Soul, Adorn Yourself with Gladness” LW 239 Exhortation p.21 Communion Service, p.144 (Lutheran Worship) Communion Hymns: “You are the Way; to You Alone” LW 283 “Jesus, Lover of My Soul” LW 508, TLH 345 “For Jerusalem You're Weeping” LW 390 --Michael D. Henson, Pastor of Trinity Lutheran Church (Herrin, IL). Service Bulletin: Judica-Cover-4-6-2025-Online.pdf Picture: Ottheinrich Bible 1430 (III:49b) Jesus in the Temple in Luke 19:45-48
The host of the studio in Uzhhorod reports on the latest events from Ukraine. The war goes on, and the past night was yet more proof of Russian aggression. Kyiv was attacked by drones again, with the Brovary district hit especially hard. A car showroom burned, and 30 civilian vehicles were damaged. In the city of Dnipro, three people were injured in a drone attack, and likewise in Zaporizhzhia, where seven drones caused destruction and a 63-year-old resident was injured. The most tragic events, however, took place in Kharkiv, where Russian shelling killed four people and wounded 32, reports Paweł Bobołowicz. Kostiantynivka, a town well known to Radio Wnet correspondents and to Polish humanitarian organizations, once again became a target. The Russians struck it with a SMERCH rocket system; the shelling killed a 24-year-old man and wounded three people. Ukrainian forces repelled attacks on the Zaporizhzhia and Donetsk fronts, destroying another Russian military column near the village of Andriivka. Over the past day, Russian losses amounted to nearly 1,500 soldiers.
Diplomatic tensions and American tariffs
President Volodymyr Zelensky declared that Ukraine remains open to negotiations and a ceasefire, but Russia shows no such will. Zelensky says a ceasefire could come within weeks or months if only there were political will on the other side, the Radio Wnet journalist reports. The United States' introduction of a 10 percent tariff on Ukrainian goods is being widely discussed in Ukraine. In Kyiv, people are asking: why Ukraine, and not Russia or Belarus? Or North Korea? It turns out that despite sanctions, trade with Russia still exists: $3.5 billion in the past year. And with Ukraine? $112 billion. French journalists calculated that if the same rules were applied to Russia, the tariff would be 42%, and for Belarus 24%, Paweł Bobołowicz stresses. In this context it is worth recalling Ronald Reagan's words from 1987, in which he warned against the consequences of trade wars and excessive tariffs. His warnings seem just as relevant today.
John Paul II and his pilgrimage to Ukraine
This week marked the 20th anniversary of the death of John Paul II, prompting reflection on his 2001 visit to Ukraine. It was an extraordinary pilgrimage: although Ukraine is a mostly Orthodox country, it was not only Catholics who came to meet the Pope, but Orthodox believers as well. "John Paul II could not go to Russia or Belarus (those regimes would not admit such a pope), but Ukraine invited him. Millions of the faithful, not only Catholics but also Orthodox, took part in meetings with the Holy Father in Lviv and Kyiv," recalls Paweł Bobołowicz. In his addresses, John Paul II emphasized Ukraine's Christian roots and its European identity. His words upon landing in Kyiv sounded prophetic: "I greet you all, beloved Ukrainians, from Donetsk to Lviv, from Kharkiv to Odesa and Simferopol. (…) Your homeland is a gateway between East and West." His words referred clearly to Ukraine's territorial integrity, including Crimea. He also stressed the need for reconciliation and a shared future for the nations of Eastern Europe, the Radio Wnet journalist recalls.
The voice of a Ukrainian soldier
In closing, Paweł Bobołowicz shares a personal story: "Near Bakhmut in 2022, I met a Ukrainian soldier who in 2001, as a boy, sang in the choir during John Paul II's pilgrimage. He was going to become a conductor; he became a soldier. He recalled how the Pope spoke to them in Ukrainian, what openness and sincerity he had, what a great figure he was. And today, fighting at the front, he still remembers those moments." Today that same man is fighting at the front near Bakhmut, but he still remembers those moments and the inspiration Saint John Paul II gave him.
Order of Divine Service, p.7 The Augustana Service Book and Hymnal Hymn “When All Thy Mercies, O My God” LW 196, TLH 31 Readings: Isaiah 49:8-13, Galatians 4:21-31, St. John 6:1-15 Hymn of the Day: “Christ the Life of All the Living” (The Augustana Service Book and Hymnal #23, LW 94, TLH 151) Sermon Offertory: "Create in Me…" p.18 General Prayer……… p.19-20 Hymn: “O Living Bread from Heaven” LW 244, TLH 316 Exhortation p.21 Communion Service, p.144 (Lutheran Worship) Communion Hymns: “By Grace I'm Saved, Grace Free and Boundless” LW 351, TLH 373 “In the Cross of Christ I Glory” LW 101, TLH 354 “In God, My Faithful God” LW 421, TLH 526 --Michael D. Henson, Pastor of Trinity Lutheran Church (Herrin, IL). Service Bulletin: Laetare-Cover-3-30-2025-Online.pdf https://vimeo.com/1068608532?share=copy Picture: Ottheinrich Bible 1430 (IV:4) Jesus Feeds the Five Thousand in John 6:1-15
Order of Divine Service, p.7 The Augustana Service Book and Hymnal Hymn “Blessed Jesus, at Thy Word” LW 202, TLH 16 Readings: 2 Samuel 22:1-7, Ephesians 5:1-9, St. Luke 11:14-28 Hymn of the Day: “A Mighty Fortress Is Our God” (The Augustana Service Book and Hymnal #22, LW 298, TLH 262) Sermon Offertory: "Create in Me…" p.18 General Prayer……… p.19-20 Hymn: “O Lord, We Praise You” LW 238, TLH 313 Exhortation p.21 Communion Service, p.144 (Lutheran Worship) Communion Hymns: “Jesus, Priceless Treasure” LW 270, TLH 347 “Renew Me, O Eternal Light” LW 373, TLH 398 “Let Us Ever Walk with Jesus” LW 381, TLH 409 --Michael D. Henson, Pastor of Trinity Lutheran Church (Herrin, IL). Service Bulletin: Oculi-Cover-3-23-2025-Online-b.pdf https://vimeo.com/1066340626?share=copy Picture: Ottheinrich Bible 1430 (III:26) Jesus and Beelzebub in Luke 11:14-26
Scheim missed out on buying a Cheeto that looks like a Pokemon // Is Brad Marchand the best LW in Bruins history? // Coco drops a bomb, says she was whispered to about Marchand's injury //
It looks like the Tee Higgins sweepstakes is over as a franchise tag looms // Time is ticking to figure out whether to trade or extend Marchand // Courtney is riddled with anxiety over all the airplane incidents as of late // Determining where Shedeur Sanders will be drafted and how it affects Pats // They Said It: Paul Pierce says the Celtics are the most hated franchise Scheim missed out on buying a Cheeto that looks like a Pokemon // Is Brad Marchand the best LW in Bruins history? // Coco drops a bomb, says she was whispered to about Marchand's injury // The NFL puts the kibosh on Belichick and UNC filming Hard Knocks // Hill Notes keeps getting weirder and weirder // Wiggy recounts the time he had to be forcibly removed from a radio station //
A historic Oscars night for Sean Baker. The American director triumphs with 'Anora', winner of five Academy Awards, and he also makes history with four wins in a single night (picture, directing, screenplay, and editing), something no one had achieved before. Only Walt Disney won four in one edition, in 1954, as a producer, and Korea's Bong Joon-ho took four for 'Parasite' (though one of those was shared). Mikey Madison also springs a surprise and takes Best Actress ahead of favorites like Demi Moore and Fernanda Torres. No surprises in the remaining categories, with Adrien Brody, Zoe Saldaña, and Kieran Culkin completing the poker of acting winners. Brazil's 'I'm Still Here' snatches Best International Feature away from 'Emilia Pérez'.
Tony and good buddy Bill Dietz travel north to Lake Winnipeg in search of “Greenbacks”. Tony has a three-part interview this week. It includes Lee Nolden, former guide and bait shop owner in Selkirk, MB. Lee talks the history of the fishery and how this whole thing got started. Donovan Pearase, fishing guide and owner of Blackwater Cats Outfitter, sits down on the ice to chat about his backstory, the status of the lake, and what anglers can expect if they are planning a trip to LW. In part three, Bill and Tony sit down to put a wrap on their trip…and tease out a little of this week's episode. Presented by: Strike Master (https://www.rapala.com/us_en/strikemaster), On-X Fish (www.onxmaps.com/fish) & St. Croix Rods (https://stcroixrods.com/)
Happy holidays! We'll be sharing snippets from Latent Space LIVE! through the break, bringing you the best of 2024! We want to express our deepest appreciation to event sponsors AWS, Daylight Computer, Thoth.ai, StrongCompute, Notable Capital, and most of all, all our LS supporters who helped fund the gorgeous venue and A/V production!

For NeurIPS last year we did our standard conference podcast coverage, interviewing selected papers (as we have now also done for ICLR and ICML); however, we felt that we could be doing more to help AI Engineers 1) get more industry-relevant content, and 2) recap the 2024 year in review from experts. As a result, we organized the first Latent Space LIVE!, our first in-person miniconference, at NeurIPS 2024 in Vancouver.

The single most requested domain was computer vision, and we could think of no one better to help us recap 2024 than our friends at Roboflow, who were among our earliest guests in 2023 and had one of this year's top episodes again in 2024. Roboflow has since raised a $40m Series B!

Links

Their slides are here:

All the trends and papers they picked:
* Isaac Robinson
* Sora (see our Video Diffusion pod) - extending diffusion from images to video
* SAM 2: Segment Anything in Images and Videos (see our SAM2 pod) - extending prompted masks to full video object segmentation
* DETR Dominance: DETRs show Pareto improvement over YOLOs
* RT-DETR: DETRs Beat YOLOs on Real-time Object Detection
* LW-DETR: A Transformer Replacement to YOLO for Real-Time Detection
* D-FINE: Redefine Regression Task in DETRs as Fine-grained Distribution Refinement
* Peter Robicheaux
* MMVP (Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs)
* Florence 2 (Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks)
* PaliGemma / PaliGemma 2
* PaliGemma: A versatile 3B VLM for transfer
* PaliGemma 2: A Family of Versatile VLMs for Transfer
* AIMv2 (Multimodal Autoregressive Pre-training of Large Vision Encoders)
* Vik Korrapati - Moondream

Full Talk on YouTube

Want more content like this? Like and subscribe to stay updated on our latest talks, interviews, and podcasts.

Transcript/Timestamps

[00:00:00] Intro

[00:00:05] AI Charlie: Welcome to Latent Space Live, our first mini conference held at NeurIPS 2024 in Vancouver. This is Charlie, your AI co-host. When we were thinking of ways to add value to our academic conference coverage, we realized that there was a lack of good talks just recapping the best of 2024, going domain by domain.

[00:00:36] AI Charlie: We sent out a survey to the over 900 of you who told us what you wanted, and then invited the best speakers in the Latent Space Network to cover each field. 200 of you joined us in person throughout the day, with over 2,200 watching live online. Our second featured keynote is The Best of Vision 2024, with Peter Robicheaux and Isaac [00:01:00] Robinson of Roboflow, with a special appearance from Vik Korrapati of Moondream.

[00:01:05] AI Charlie: When we did a poll of our attendees, the highest interest domain of the year was vision. And so our first port of call was our friends at Roboflow. Joseph Nelson helped us kickstart our vision coverage in episode 7 last year, and this year came back as a guest host with Nikhila Ravi of Meta to cover Segment Anything 2.

[00:01:25] AI Charlie: Roboflow have consistently been the leaders in open source vision models and tooling, with their Supervision library recently eclipsing PyTorch's Vision library.
And Roboflow Universe hosting hundreds of thousands of open source vision datasets and models. They have since announced a $40 million Series B led by Google Ventures.[00:01:46] AI Charlie: Woohoo.[00:01:48] Isaac's picks[00:01:48] Isaac Robinson: Hi, we're Isaac and Peter from Roboflow, and we're going to talk about the best papers of 2024 in computer vision. So, for us, we defined best as what made [00:02:00] the biggest shifts in the space. And to determine that, we looked at what are some major trends that happened and what papers most contributed to those trends.[00:02:09] Isaac Robinson: So I'm going to talk about a couple trends, Peter's going to talk about a trend, and then we're going to hand it off to Moondream. So, the trends that I'm interested in talking about are a major transition from models that run on a per-image basis to models that run using the same basic ideas on video, and then also how DETRs are starting to take over the real-time object detection scene from the YOLOs, which have been dominant for years.[00:02:37] Sora, OpenSora and Video Vision vs Generation[00:02:37] Isaac Robinson: So as a highlight we're going to talk about Sora, which from my perspective is the biggest paper of 2024, even though it came out in February. Is the what? Yeah. Yeah. So, Sora is just a blog post. So I'm going to fill it in with details from replication efforts, including OpenSora and related work, such as Stable [00:03:00] Video Diffusion. And then we're also going to talk about SAM 2, which applies the SAM strategy to video. And then how DETRs, these are the improvements in 2024 to DETRs that are making them a Pareto improvement over YOLO-based models.[00:03:15] Isaac Robinson: So to start this off, we're going to talk about the state of the art of video generation at the end of 2023: MAGVIT. MAGVIT is a discrete-token video tokenizer akin to VQGAN, but applied to video sequences. And it actually outperforms state-of-the-art handcrafted video compression frameworks[00:03:38] Isaac Robinson: in terms of bit rate versus human preference for quality, and videos generated by autoregressing on these discrete tokens are some pretty nice stuff, but up to like five seconds in length and, you know, not super detailed. And then suddenly a few months later we have this, which when I saw it was totally mind-blowing to me.[00:03:59] Isaac Robinson: 1080p, [00:04:00] a whole minute long. We've got light reflecting in puddles. That's reflective. Reminds me of those RTX demonstrations for next-generation video games, such as Cyberpunk, but with better graphics. You can see some issues in the background if you look closely, but as with a lot of these models, the issues tend to be things that people aren't going to pay attention to unless they're looking for them.[00:04:24] Isaac Robinson: In the same way that six fingers on a hand is a giveaway you're not going to notice unless you're looking for it. So yeah, as we said, Sora does not have a paper, so we're going to be filling it in with context from the rest of the computer vision scene attempting to replicate these efforts. So the first step: you have an LLM caption a huge amount of videos.[00:04:48] Isaac Robinson: This is a trick that they introduced in DALL-E 3, where they train an image captioning model to generate very high quality captions for a huge corpus and then train a diffusion model [00:05:00] on that.
Sora and the replication efforts also show a bunch of other steps that are necessary for good video generation,[00:05:09] Isaac Robinson: including filtering by aesthetic score and filtering by making sure the videos have enough motion, so the generator isn't learning to just produce static frames. Then we encode our video into a series of space-time latents. Once again, Sora is very sparse on details.[00:05:29] Isaac Robinson: Among the replication-related works, OpenSora actually uses MAGVIT-v2 itself to do this, but swaps out the discretization step for a classic VAE autoencoder framework. They show that there's a lot of benefit from getting the temporal compression, which makes a lot of sense, as sequential frames in videos carry mostly redundant information.[00:05:53] Isaac Robinson: So by compressing in the temporal space, you allow the latent to hold [00:06:00] a lot more semantic information while avoiding that duplication. So, we've got our space-time latents, possibly via some 3D VAE, presumably a MAGVIT-v2, and then you throw it into a diffusion transformer.[00:06:19] Isaac Robinson: So I think it's personally interesting to note that OpenSora is using a MAGVIT-v2, which originally used an autoregressive transformer decoder to model the latent space, but is now using a diffusion transformer. So it's still a transformer happening; the question is just, is it[00:06:37] Isaac Robinson: parameterizing the stochastic differential equation, or parameterizing a conditional distribution via autoregression? It's also worth noting that most diffusion models today, the very high performance ones, are switching away from the classic denoising diffusion probabilistic modeling (DDPM) framework to rectified flows.[00:06:57] Isaac Robinson: Rectified flows have a very interesting property: as [00:07:00] they converge, they actually get closer to being able to be sampled with a single step, which means that in practice you can actually generate high quality samples much faster. A major problem of DDPM and related models for the past four years is just that they require many, many steps to generate high quality samples.[00:07:22] Isaac Robinson: And naturally, the third step is throwing lots of compute at the problem. So I never figured out how to get this video to loop, but we see very little compute, medium compute, lots of compute. This is so interesting because the original diffusion transformer paper from Facebook actually showed that, in fact, the specific hyperparameters of the transformer didn't really matter that much.[00:07:48] Isaac Robinson: What mattered was that you were just increasing the amount of compute that the model had. So I love how in the, once again, little blog post, they don't even talk about [00:08:00] the specific hyperparameters. They say, we're using a diffusion transformer, and we're just throwing more compute at it, and this is what happens.[00:08:08] Isaac Robinson: OpenSora shows similar results. The primary issue I think here is that no one else has a 32x compute budget, so we end up in the middle of the domain in most of the related work, which is still super, super cool. It's just a little disappointing considering the context.
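To make the rectified-flow objective Isaac describes concrete, here is a minimal PyTorch-style sketch; the `model(x_t, t)` signature and the Euler sampler are illustrative assumptions, not Sora's or OpenSora's actual code:

```python
import torch
import torch.nn.functional as F

def rectified_flow_loss(model, x0):
    # x0: batch of clean latents, shape (B, C, H, W). The model learns the
    # constant velocity (eps - x0) along the straight path
    # x_t = (1 - t) * x0 + t * eps.
    eps = torch.randn_like(x0)                      # noise endpoint
    t = torch.rand(x0.shape[0], 1, 1, 1, device=x0.device)
    x_t = (1 - t) * x0 + t * eps                    # point on the straight path
    v_pred = model(x_t, t.flatten())                # predicted velocity
    return F.mse_loss(v_pred, eps - x0)             # regress the true velocity

@torch.no_grad()
def sample(model, shape, steps=4, device="cpu"):
    # Euler integration from noise (t = 1) back to data (t = 0).
    x = torch.randn(shape, device=device)
    ts = torch.linspace(1.0, 0.0, steps + 1, device=device)
    for t_hi, t_lo in zip(ts[:-1], ts[1:]):
        v = model(x, t_hi.expand(shape[0]))
        x = x + (t_lo - t_hi) * v
    return x
```

The single-step property Isaac mentions falls out of the loss: as the learned flow straightens, the velocity becomes nearly constant along each path, so even `steps=1` can give usable samples.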
So I think this is a beautiful extension of the framework that was introduced in '22 and '23 for very high quality per-image generation, and then extending that to videos.[00:08:39] Isaac Robinson: It's awesome. And it's GA as of Monday, except no one can seem to get access to it because they keep shutting down the login.[00:08:46] SAM and SAM 2[00:08:46] Isaac Robinson: The next paper I wanted to talk about is SAM. So we at Roboflow allow users to label data and train models on that data. SAM, for us, has saved our users 75 years of [00:09:00] labeling time.[00:09:00] Isaac Robinson: We are, to the best of my knowledge, the largest SAM API that exists. SAM also allows us to have our users train just pure bounding-box regression models and use those to generate high quality masks, which has the great side effect of requiring less training data to reach a meaningful convergence.[00:09:20] Isaac Robinson: Most people are data-limited in the real world, so anything that requires less data to get to a useful thing is super useful. Most of our users actually run their per-frame object detectors on every frame in a video, or maybe not most, but many, many. And so SAM 2 falls into this category of taking something that really, really works and applying it to video, which has the wonderful benefit of being plug and play with many of our users' use cases.[00:09:53] Isaac Robinson: We're still building out a sufficiently mature pipeline to take advantage of that, but it's in the works. [00:10:00] So here we've got a great example. We can click on cells and then follow them. You even notice the cell goes away and comes back and we can still keep track of it, which is very challenging for existing object trackers.[00:10:14] Isaac Robinson: High level overview of how SAM 2 works: there's a simple pipeline here where we can provide some type of prompt and it fills out the rest of the likely masks for that object throughout the rest of the video. So here we're giving a bounding box in the first frame, a set of positive/negative points, or even just a simple mask.[00:10:36] Isaac Robinson: I'm going to assume people are somewhat familiar with SAM, so I'm going to just give a high level overview of how SAM works. You have an image encoder that runs on every frame. SAM 2 can be used on a single image, in which case the only difference between SAM 2 and SAM is the image encoder: SAM used a standard ViT; [00:11:00] SAM 2 replaced that with Hiera, a hierarchical encoder, which gets approximately the same results but leads to six times faster inference, which is[00:11:11] Isaac Robinson: excellent, especially considering how a trend of '23 was replacing the ViT with more efficient backbones. In the case where you're doing video segmentation, the difference is that you actually create a memory bank and you cross-attend the features from the image encoder against the memory bank.[00:11:31] Isaac Robinson: So the feature set that is created is essentially, well, I'll go more into it in a couple of slides, but we take the features from the past couple frames, plus a set of object pointers and the set of prompts, and use that to generate our new masks. Then we fuse the new masks for this frame with the[00:11:57] Isaac Robinson: image features and add that to the memory bank. [00:12:00] It's, well, I'll say more in a minute.
Just like SAM, SAM 2 actually uses a data engine to create its dataset, in that they assembled a huge amount of reference data, used people to label some of it, trained the model, used the model to label more of it, and asked people to refine the predictions of the model.[00:12:20] Isaac Robinson: And then ultimately the dataset is just created from the final output of the model on the reference data. It's very interesting. This paradigm is so interesting to me because it unifies a model and a dataset in a way that is very unique. It seems unlikely that another model could come in and have such a tight fit.[00:12:37] Isaac Robinson: So, brief overview of how the memory bank works; the paper did not have a great visual, so I'm going to fill in a bit more. We take the last couple of frames from our video and attend to that, along with the set of prompts that we provided (they could come from the future, [00:13:00] they could come from anywhere in the video), as well as reference object pointers saying, by the way, here's what we've found so far. Attending to the last few frames has the interesting benefit of allowing it to model complex object motion.[00:13:18] Isaac Robinson: By limiting the amount of frames that you attend to, you manage to keep the model running in real time. This is such an interesting topic for me because one would assume that attending to all of the frames, or having some type of summarization of all the frames, is super essential for high performance.[00:13:35] Isaac Robinson: But we see in their later ablation that that actually is not the case. So here, just to make sure that there is some benchmarking happening, we just compared to some of the stuff that came out prior, and indeed the SAM 2 strategy does improve on the state of the art. This ablation deep in their appendices was super interesting to me.[00:13:59] Isaac Robinson: [00:14:00] We see in section C the number of memories. One would assume that increasing the count of memories would meaningfully increase performance. And we see that it has some impact, but not the type that you'd expect, and that it meaningfully decreases speed, which justifies, in my mind, just having this FIFO queue of memories.[00:14:20] Isaac Robinson: Although in the future, I'm super interested to see a more dedicated summarization of all of the last video, not just a stacking of the last frames. So that's another extension of beautiful per-frame work into the video domain.
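As a rough illustration of the FIFO memory idea, here is a toy sketch; the real SAM 2 also stores object pointers and prompt tokens and uses dedicated memory-attention layers, so treat this module as a conceptual stand-in rather than the paper's architecture:

```python
import collections
import torch
import torch.nn as nn

class FrameMemoryBank(nn.Module):
    # The current frame's features cross-attend to the features of the last
    # `capacity` frames; older frames silently fall out of the FIFO queue.
    def __init__(self, dim=256, heads=8, capacity=6):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.memory = collections.deque(maxlen=capacity)

    def forward(self, frame_feats):
        # frame_feats: (B, num_tokens, dim) encoder features for this frame
        if self.memory:
            mem = torch.cat(list(self.memory), dim=1)    # (B, capacity*num_tokens, dim)
            fused, _ = self.attn(frame_feats, mem, mem)  # query = current frame
            frame_feats = frame_feats + fused            # residual fusion
        self.memory.append(frame_feats.detach())         # enqueue for later frames
        return frame_feats
```

The `deque(maxlen=...)` is the ablation's conclusion in miniature: a bounded window keeps inference real-time, and the ablation suggests a longer window would buy little accuracy anyway.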
[00:14:42] Realtime detection: DETRs > YOLO[00:14:42] Isaac Robinson: The next trend I'm interested in talking about is this: at Roboflow, we're super interested in training real-time object detectors.[00:14:50] Isaac Robinson: Those are bread and butter. And so we're doing a lot to keep track of what is actually happening in that space. We are finally starting to see something change. So, [00:15:00] for years, YOLOs have been the dominant way of doing real-time object detection, and we can see here that they've essentially stagnated.[00:15:08] Isaac Robinson: The performance between v10 and v11 is not meaningfully different, at least, you know, in this type of high-level chart. And even from the last couple series, there's not a major change. So YOLOs have hit a plateau; DETRs have not. So we can look here and see the YOLO series has this plateau. And then these RT-DETR, LW-DETR, and D-FINE have meaningfully changed that plateau, so that in fact the best D-FINE models are plus[00:15:43] Isaac Robinson: 4.6 AP on COCO at the same latency. So, three major steps to accomplish this. The first, RT-DETR, which is technically a 2023 paper preprint, but published officially in '24, so I'm going to include that. I hope that's okay. [00:16:00] RT-DETR showed that we could actually match or out-speed YOLOs.[00:16:04] Isaac Robinson: And then LW-DETR showed that pre-training is hugely effective on DETRs and much less so on YOLOs. And then D-FINE added the types of bells and whistles that we expect from this arena. So the major improvement that RT-DETR shows was taking the multi-scale features that DETRs typically pass into their encoder and decoupling them into a much more efficient transformer encoder.[00:16:30] Isaac Robinson: The transformer is, of course, quadratic complexity, so decreasing the amount of stuff that you pass in at once is super helpful for increasing your runtime or increasing your throughput. So that change basically brought us up to YOLO speed, and then they do a hardcore analysis on benchmarking YOLOs, including the NMS step.[00:16:54] Isaac Robinson: Once you include the NMS in the latency calculation, you see that in fact these DETRs [00:17:00] are outperforming, at least at this time, the YOLOs that existed. Then LW-DETR goes in and suggests that in fact the huge boost here is from pre-training. So, this is the D-FINE line, and this is the D-FINE line without pre-training.[00:17:19] Isaac Robinson: It's within range, it's still an improvement over the YOLOs, but the really huge boost comes from the benefit of pre-training. When YOLOX came out in 2021, they showed that they got much better results by having a much, much longer training time, but they found that when they did that, they actually did not benefit from pre-training.[00:17:40] Isaac Robinson: So, you see in this graph from LW-DETR, in fact, YOLOs do have a real benefit from pre-training, but it goes away as we increase the training time. The DETRs, meanwhile, converge much faster: LW-DETR trains for only 50 epochs, RT-DETR is 60 epochs. So one could assume that, in fact, [00:18:00] the entire extra gain from pre-training is that you're not destroying your original weights[00:18:06] Isaac Robinson: by relying on this long training cycle. And then LW-DETR also shows superior performance on our favorite dataset, Roboflow 100, which means that they do better on the real world, not just on COCO. Then D-FINE throws all the bells and whistles at it. YOLO models tend to have a lot of very specific, complicated loss functions.[00:18:26] Isaac Robinson: D-FINE brings that into the DETR world and shows consistent improvement on a variety of DETR-based frameworks. So bring these all together and we see that suddenly we have almost 60 AP on COCO while running in like 10 milliseconds. Huge, huge stuff. So we're spending a lot of time trying to build models that work better with less data, and DETRs are clearly becoming a promising step in that direction.
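To see why including NMS in the latency math matters, here is a small self-contained timing sketch using torchvision's NMS on random boxes; the box count and IoU threshold are arbitrary illustrative choices:

```python
import time
import torch
from torchvision.ops import nms

# A YOLO-style head emits thousands of overlapping candidates that NMS must
# prune; a DETR-style head emits a fixed, small set of queries and skips this
# step entirely, so a fair end-to-end latency comparison has to include it.
boxes = torch.rand(8000, 4) * 640
boxes[:, 2:] += boxes[:, :2]        # make (x1, y1, x2, y2) with x2 > x1, y2 > y1
scores = torch.rand(8000)

start = time.perf_counter()
keep = nms(boxes, scores, iou_threshold=0.65)
elapsed_ms = (time.perf_counter() - start) * 1e3
print(f"NMS kept {keep.numel()} of 8000 boxes in {elapsed_ms:.2f} ms")
```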
[00:18:56] Isaac Robinson: What we're interested in seeing [00:19:00] from the DETRs next: Co-DETR and the models that are currently sitting at the top of the leaderboard for large-scale inference scale really well as you switch out the backbone. We're very interested in seeing, and having people publish a paper, potentially us, on what happens if you take these real-time ones and then throw a Swin at it.[00:19:23] Isaac Robinson: Like, do we have a Pareto curve that extends from the real-time domain all the way up to the super, super slow but high performance domain? We also want to see people benchmarking on RF100 more, because that type of data is what's relevant for most users. And we want to see more pre-training, because pre-training works now.[00:19:43] Isaac Robinson: It's super cool.[00:19:48] Peter's Picks[00:19:48] Peter Robicheaux: Alright, so, yeah, in that theme, one of the big things that we're focusing on is how do we get more out of our pre-trained models. And one of the lenses to look at this through is sort of [00:20:00] this new requirement for fine-grained visual details in the representations that are extracted from your foundation model.[00:20:08] Peter Robicheaux: So that's sort of the hook for this. Oh yeah, this is just a list of all the papers that I'm going to mention; I just want to make sure I cite the actual paper so you can find it later.[00:20:18] MMVP (Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs)[00:20:18] Peter Robicheaux: Yeah, so sort of the big hook here is that I make the claim that LLMs can't see. If you go to Claude or ChatGPT and you ask it to look at this watch and tell me what time it is, it fails, right?[00:20:34] Peter Robicheaux: And so you could say, like, maybe this is a very classic test of an LLM, but you could say, okay, maybe this image is too zoomed out, and it'll do better if we increase the resolution and it has an easier time finding these fine-grained features, like where the watch hands are pointing.[00:20:53] Peter Robicheaux: No dice. And you can say, okay, well, maybe the model just doesn't know how to tell time from knowing the position of the hands. But if you actually prompt [00:21:00] it textually, it's very easy for it to tell the time. So this to me is proof that these LLMs literally cannot see the position of the watch hands and can't see those details.[00:21:08] Peter Robicheaux: So the question is sort of why? And for you Anthropic heads out there, Claude fails too. So my first pick for best paper of 2024 in vision is this MMVP paper, which tries to investigate why LLMs do not have the ability to see fine-grained details. And so, for instance, it comes up with a lot of images like this, where you ask it a question that seems very visually apparent to us, like, which way is the school bus facing?[00:21:32] Peter Robicheaux: And it gets it wrong, and then, of course, it makes up details to support its wrong claim. And so, the process by which it finds these images is sort of contained in its hypothesis for why it can't see these details. So it hypothesizes that models that have been initialized with CLIP as their vision encoder don't have fine-grained details in the features extracted using CLIP, because CLIP sort of doesn't need to find these fine-grained [00:22:00] details to do its job correctly, which is just to match captions and images, right?[00:22:04] Peter Robicheaux: And sort of at a high level, even if ChatGPT wasn't initialized with CLIP, and the vision encoder wasn't trained contrastively at all,
still, in order to do its job of captioning the image, it could do a pretty good job without actually finding the exact position of all the objects and visual features in the image, right?[00:22:21] Peter Robicheaux: So this paper finds a set of difficult images for these types of models. And the way it does it is it looks for embeddings that are similar in CLIP space, but far in DINOv2 space. So DINOv2 is a foundation model that was trained self-supervised purely on image data. And it uses some complex student-teacher framework, but essentially it patches out certain areas of the image, or crops certain areas of the image, and tries to make sure that those have consistent representations, which is a way for it to learn very fine-grained visual features.[00:22:54] Peter Robicheaux: And so if you take things that are very close in CLIP space and very far in DINOv2 space, you get a set of images, [00:23:00] basically pairs of images, that are hard for ChatGPT and other big language models to distinguish. So, if you then ask it questions about this image, well, as you can see from this chart, it's going to answer the same way for both images, right?[00:23:14] Peter Robicheaux: Because from the perspective of the vision encoder, they're the same image. And so if you ask a question like, how many eyes does this animal have? It answers the same for both. And all these other models, including LLaVA, do the same thing, right? And so this is the benchmark that they create, which is finding CLIP-blind pairs, pairs of images that are similar in CLIP space, and creating a dataset of multiple-choice questions based off of those.
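A hedged sketch of that mining recipe, assuming you already have one embedding per image from each encoder; the similarity thresholds are illustrative placeholders, not necessarily the paper's exact values:

```python
import torch
import torch.nn.functional as F

def find_clip_blind_pairs(clip_emb, dino_emb, clip_thresh=0.95, dino_thresh=0.6):
    # clip_emb, dino_emb: (N, D) embeddings of the same N images from each
    # encoder. Returns (num_pairs, 2) indices of image pairs that CLIP sees
    # as near-identical but DINOv2 sees as clearly different.
    clip_sim = F.normalize(clip_emb) @ F.normalize(clip_emb).T   # cosine sims
    dino_sim = F.normalize(dino_emb) @ F.normalize(dino_emb).T
    blind = (clip_sim > clip_thresh) & (dino_sim < dino_thresh)
    blind = torch.triu(blind, diagonal=1)   # count each pair once, skip i == j
    return blind.nonzero()
```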
[00:23:39] Peter Robicheaux: And so how do these models do? Well, really bad. ChatGPT and Gemini do a little bit better than random guessing, but like half of the performance of humans, who find these problems to be very easy. LLaVA is, interestingly, extremely negatively correlated with this dataset. It does much, much, much, much worse [00:24:00] than random guessing, which means that this process has done a very good job of identifying hard images for LLaVA specifically.[00:24:07] Peter Robicheaux: And that's because LLaVA is basically not trained for very long and is initialized from CLIP, and so you would expect it to do poorly on this dataset. So, one of the proposed solutions that this paper attempts is basically saying, okay, well if CLIP features aren't enough, what if we train the visual encoder of the language model also on DINOv2 features?[00:24:27] Peter Robicheaux: And so it proposes two different ways of doing this. One, additively, which is basically interpolating between the two features, and then one is interleaving, which is just kind of training on the combination of both features. So there's this really interesting trend when you do the additive mixture of features.[00:24:45] Peter Robicheaux: So zero is all CLIP features and one is all DINOv2 features. I think it's helpful to look at the rightmost chart first, which shows that as you increase the number of DINOv2 features, your model does worse and worse and [00:25:00] worse on the actual language modeling task. And that's because DINOv2 features were trained completely in a self-supervised manner and completely in image space.[00:25:08] Peter Robicheaux: It knows nothing about text. These features aren't really compatible with these text models. And so you can train an adapter all you want, but it seems that it's in such an alien language that it's a very hard optimization problem for these models to solve. And so that kind of supports what's happening on the left, which is that, yeah, it gets better at answering these questions as you include more DINOv2 features, up to a point, but when you oversaturate, it completely loses its ability to[00:25:36] Peter Robicheaux: answer language and do language tasks. You can also see with the interleaving, they essentially double the number of tokens that are going into these models and just train on both, and it still doesn't really solve the MMVP task. It gets LLaVA 1.5 above random guessing by a little bit, but it's still not close to ChatGPT or, you know, any human performance, obviously.[00:26:00] Peter Robicheaux: So clearly this proposed solution of just using DINOv2 features directly isn't going to work. And basically what that means is that, as a vision foundation model, DINOv2 is going to be insufficient for language tasks, right?[00:26:14] Florence 2 (Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks)[00:26:14] Peter Robicheaux: So my next pick for best paper of 2024 would be Florence-2, which tries to solve this problem by incorporating not only this dimension of spatial hierarchy, which is to say pixel-level understanding, but also making sure to include what they call semantic granularity. The goal is basically to have features that are sufficient for finding objects in the image, so they have enough pixel information, but also can be talked about and can be reasoned about.[00:26:44] Peter Robicheaux: And that's on the semantic granularity axis. So here's an example of basically three different paradigms of labeling that they do. So they create a big dataset. One is text, which is just captioning. And you would expect a model that's trained [00:27:00] only on captioning to have performance similar to ChatGPT: not have spatial hierarchy, not have features that are meaningful at the pixel level.[00:27:08] Peter Robicheaux: And so they add another type, which is region-text pairs, which is essentially either classifying a region, or doing object detection or instance segmentation on that region, or captioning that region. And then they have text-phrase-region annotations, which is essentially a triple: not only do you have a region that you've described, you also find its place in a descriptive paragraph about the image, which is basically trying to introduce even more semantic understanding of these regions.[00:27:39] Peter Robicheaux: And so, for instance, if you're saying a woman riding on the road, right, you have to know what a woman is and what the road is and that she's on top of it. And that's basically composing a bunch of objects in this visual space, but also thinking about it semantically, right? And so the way that they do this is they basically just dump features from a vision encoder [00:28:00] straight into an encoder-decoder transformer.[00:28:03] Peter Robicheaux: And then they train a bunch of different tasks, like object detection and so on, as a language task. And I think that's one of the big things that we saw in 2024: these vision language models operating on pixel space linguistically. So they introduced a bunch of new tokens to point to locations.
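The location-token trick can be sketched in a few lines; the `<loc_i>` naming and the 1000-bin grid below are hypothetical placeholders for whatever vocabulary a given model actually uses:

```python
def box_to_location_tokens(box, img_w, img_h, bins=1000):
    # Quantize a pixel-space box (x1, y1, x2, y2) onto a fixed grid and emit
    # one vocabulary token per coordinate, so a detection target becomes an
    # ordinary token sequence for the language model.
    x1, y1, x2, y2 = box
    coords = (x1 / img_w, y1 / img_h, x2 / img_w, y2 / img_h)
    ids = [min(bins - 1, int(c * bins)) for c in coords]
    return "".join(f"<loc_{i}>" for i in ids)

print(box_to_location_tokens((32, 64, 320, 480), img_w=640, img_h=480))
# -> <loc_50><loc_133><loc_500><loc_999>
```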
[00:26:14] Florence-2 (Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks)

[00:26:14] Peter Robicheaux: So my next pick for best paper of 2024 would be Florence-2, which tries to solve this problem by incorporating not only this dimension of spatial hierarchy, which is to say pixel-level understanding, but also what they call semantic granularity. The goal is to have features that are sufficient for finding objects in the image, so they carry enough pixel information, but that can also be talked about and reasoned about; that's the semantic-granularity axis.

[00:26:44] Peter Robicheaux: So here's an example of the three labeling paradigms they use; they create a big dataset. One is text, which is just captioning, and you would expect a model trained only on captioning to perform like ChatGPT here and not have spatial hierarchy, not have features that are meaningful at the pixel level. So they add another type, region-text pairs, which is essentially classifying a region, doing object detection or instance segmentation on it, or captioning it. And then they have text-phrase-region annotations, which are essentially triples: not only do you have a region you've described, you also find its place in a descriptive paragraph about the image, which introduces even more semantic understanding of those regions. For instance, if you're saying "a woman riding on the road," you have to know what a woman is, what the road is, and that she's on top of it. That's composing a bunch of objects in visual space, but also thinking about them semantically, right?

[00:27:39] Peter Robicheaux: And the way they do this is they basically just dump the features from a vision encoder straight into an encoder-decoder transformer, and then they train a bunch of different tasks, like object detection and so on, as language tasks. I think that's one of the big things we saw in 2024: vision-language models operating on pixel space linguistically. So they introduce a bunch of new tokens to point to locations.

[00:28:22] Peter Robicheaux: So how does it actually do? If you look at the graph on the right, which is using the DINO detection framework, your pre-trained Florence-2 models transfer very, very well: they get 60 mAP on COCO, which is approaching state of the art, and they train much more efficiently, converging a lot faster. Both of these things point to the fact that they're actually leveraging their pre-trained weights effectively.

[00:28:47] Peter Robicheaux: So where does it fall short? I forgot to mention, Florence-2 comes in 0.2-billion and 0.7-billion parameter sizes, so it's very, very small in terms of being a language model. And I think you can see saturation in this framework. What this graph is showing is that if you train a Florence-2 model purely on the image-level and region-level annotations, and leave out the pixel-level annotations like segmentation, it actually performs better as an object detector. What that means is that it's not able to learn all the visual tasks it's trying to learn, because it doesn't have enough capacity.

[00:29:32] PaliGemma / PaliGemma 2

[00:29:32] Peter Robicheaux: So I'd like to see this paper explore larger model sizes, which brings us to our next big paper of 2024, or two papers: PaliGemma came out earlier this year, and PaliGemma 2 was released, I think, a week or two ago. Oh, I forgot to mention: you can label datasets on Roboflow and train a Florence-2 model, and you can actually train a PaliGemma 2 model on Roboflow too, which we got into the platform within about 14 hours of release, which I was really excited about.

[00:29:59] Peter Robicheaux: So, anyway, PaliGemma is essentially doing the same thing, but instead of an encoder-decoder it just dumps everything into a decoder-only transformer. It also introduced the concept of location tokens to point to objects in pixel space. PaliGemma uses Gemma as the language model, specifically Gemma 2B; PaliGemma 2 introduces multiple sizes of language model. The way they get around having to do encoder-decoder is the concept of prefix loss, which basically means that when the model is generating tokens autoregressively, all the tokens in the prefix, which is the image it's looking at plus a description of the task it's trying to do, attend to each other fully, with full attention. That means it's easier for the prefix to color the output of the suffix, and easier to find features.
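That prefix-versus-suffix attention pattern is easy to picture in code. Below is the standard prefix-LM mask construction, a generic sketch rather than PaliGemma's actual implementation:

```python
import torch

def prefix_lm_mask(prefix_len: int, total_len: int) -> torch.Tensor:
    """Boolean attention mask (True = may attend). Prefix tokens -- the
    image tokens plus the task description -- attend to each other fully;
    suffix tokens attend causally to everything before them."""
    mask = torch.tril(torch.ones(total_len, total_len, dtype=torch.bool))
    mask[:prefix_len, :prefix_len] = True  # fully visible prefix block
    return mask

print(prefix_lm_mask(prefix_len=4, total_len=7).int())
```

Rows are query positions and columns are key positions: the first `prefix_len` positions form a fully visible block, and everything after attends causally.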
[00:31:00] Peter Robicheaux: So this is an example of one of the tasks it was trained on: you describe the task in English, you ask it to segment these two classes of objects, and it finds their locations using these location tokens and their masks using some encoding of the masks into tokens.

[00:31:24] Peter Robicheaux: And, yeah, one of my critiques, at least of PaliGemma 1, is that performance saturates as a pre-trained model after only 300 million examples seen. What this graph is representing is that each blue dot is performance on some downstream task, and you can see that after 300 million examples it does about equally well on all the downstream tasks they tried, which was a lot, as it does after 1 billion examples, which to me also suggests a lack of capacity for this model.

[00:32:00] Peter Robicheaux: With PaliGemma 2 you can see the results on object detection; these were transferred to COCO. And this also points to an increase in capacity being helpful to the model: as both the resolution and the parameter count of the language model increase, performance increases. Resolution makes sense, obviously, since it helps to find small objects in the image, but it also makes sense for another reason: it gives the model a kind of thinking register, more tokens to process when making its predictions. But yeah, you could say, oh, 43.6, that's not that great, Florence-2 got 60. But this is not training a DINO or a DETR on top of the image encoder; it's doing the raw language-modeling task on COCO. It doesn't have any of the bells and whistles, it doesn't have any of the fancy losses, it doesn't even have bipartite graph matching or anything like that.

[00:32:52] Peter Robicheaux: Okay, the big result, and one of the reasons I was really excited about this paper, is that they blow everything else away on MMVP. I mean, 47.3, sure, that's nowhere near human accuracy, which, again, is 94%, but for a 2-billion-parameter language model to beat ChatGPT, that's quite the achievement.

[00:33:12] Peter Robicheaux: And that brings us to our final pick for paper of the year, which is AIMv2. AIMv2 says: okay, maybe coming up with all these specific annotations to find features with high fidelity in pixel space isn't actually necessary, and we can come up with an even simpler, more beautiful idea for combining image tokens and text tokens in a way that works for language tasks. And this is nice because it can scale: you can get lots more data if you don't have to come up with all these annotations, right? So the way it works is very, very similar to PaliGemma: you have a vision encoder that dumps image tokens into a decoder-only transformer. But the interesting thing is that it also autoregressively learns to reconstruct the image tokens under a mean-squared-error loss.
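A toy version of that objective as described here: regress each next patch embedding under mean-squared error. A plain linear layer stands in for the prefix-masked, decoder-only transformer and pixel head of the real model (the random prefix sampling it uses is covered next):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in for the real decoder-only transformer plus pixel head, which
# would also apply a prefix-LM mask like the one sketched above.
predictor = nn.Linear(768, 768)

def autoregressive_patch_loss(patch_tokens: torch.Tensor) -> torch.Tensor:
    """Sample a random prefix length, then regress each remaining patch
    embedding from the tokens before it with an MSE objective."""
    b, p, d = patch_tokens.shape
    prefix_len = int(torch.randint(1, p, (1,)))
    preds = predictor(patch_tokens)        # (b, p, d); causal in the real model
    pred = preds[:, prefix_len - 1:-1, :]  # token t predicts patch t+1
    target = patch_tokens[:, prefix_len:, :]
    return F.mse_loss(pred, target)

print(autoregressive_patch_loss(torch.randn(2, 256, 768)).item())
```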
So instead of having to come up with fancy object detection or segmentation labels, you can just try to reconstruct the image and have it learn fine-grained features that way.

[00:34:16] Peter Robicheaux: And it does this in, I think, a beautiful way that's compatible with the PaliGemma line of thinking: randomly sampling a prefix length and using only that number of image tokens as the prefix. The causal-with-prefix setup is the attention mask on the right: full block attention over some randomly sampled number of image tokens, which are then used to reconstruct the rest of the image and the downstream caption for that image.

[00:34:35] Peter Robicheaux: And this is the dataset they train on. It's internet-scale, very high-quality data created by the Data Filtering Networks paper, which is maybe the best CLIP data that exists.

[00:35:00] Peter Robicheaux: And we can see that this is finally a model that doesn't saturate. Even at the highest parameter count, it appears to keep improving with more and more samples seen. So you can imagine that if we just keep bumping the parameter count and the examples seen, which is the line of thinking for language models, it'll keep getting better.

[00:35:27] Peter Robicheaux: It also improves with resolution, which you would expect. This chart is ImageNet classification accuracy: it does better as you increase the resolution, which means it's actually leveraging and finding fine-grained visual features.

[00:35:44] Peter Robicheaux: And so how does it actually do compared to CLIP on COCO? Well, you can see that if you slap a transformer detection head on it and train on COCO, it gets 60.2, which is within spitting distance of SOTA, which means it does a very good job of finding visual features. But you could say: okay, wait a second, CLIP got 59.1, so how does this prove your claim at all? Doesn't that mean CLIP, which is known to be CLIP-blind and do badly on MMVP, is able to achieve very high performance on this fine-grained visual-feature task of object detection? Well, they train on tons of data: Objects365, COCO, Flickr, and everything else. So I think this benchmark doesn't do a great job of showing how good a pre-trained model AIMv2 is, and we would like to see performance with fewer examples, not trained to convergence on object detection. Seeing it in the real world on a dataset like Roboflow 100 would be quite interesting.

[00:36:48] Peter Robicheaux: And our final, final pick for paper of 2024 would be Moondream. So, introducing Vik to talk about that.

[00:36:54] swyx: But overall, that was exactly what I was looking for: best of 2024, an amazing job. If there are any other questions while Vik gets set up, like vision stuff, go ahead.

[00:37:13] Vik Korrapati / Moondream

[00:37:13] question: Hi, over here. While we're getting set up, thanks for the really awesome talk.
One of the things that's been weird and surprising is that the foundation model companies, even these VLMs, are still just worse than RT-DETR at detection. Like, if you wanted to pay a bunch of money to auto-label your detection dataset, giving it to OpenAI or Claude would be a big waste.

[00:37:37] question: So I'm curious, like, even PaliGemma 2 is worse. So I'm curious to hear your thoughts on how come nobody's cracked the code on a generalist that really beats a specialist model in computer vision, like they have in LLM land.

[00:38:01] Isaac Robinson: Okay. It's a very, very interesting question. I think it depends on the specific domain. For image classification, it's basically there: as AIMv2 showed, a simple attentional probe on the pre-trained features gets like 90%, which is as well as anyone does. The bigger question is why it isn't transferring to object detection, especially real-time object detection. In my mind, there are two answers. One is that for object detection, the architectures are really, really domain-specific. We see all these super complicated things, and it's not easy to build something that just transfers naturally like that, whereas with image classification, CLIP pre-training transfers super quickly.

[00:38:48] Isaac Robinson: And the other thing is that, until recently, the real-time object detectors didn't even really benefit from pre-training. You see the YOLOs, which are essentially saturated, showing very little difference from pre-training improvements, or from using a pre-trained model at all. So it's not surprising, necessarily, that people aren't looking at the effects of better and better pre-training on real-time detection. Maybe that'll change in the next year. Does that answer your question?

[00:39:17] Peter Robicheaux: Can you guys hear me? Yeah, one thing I want to add, just to summarize: until 2024, we hadn't really seen a combination of transformer-based object detectors and fancy losses, and PaliGemma suffers from the same problem. The ResNets, the convolutional models, have all these extreme optimizations for doing object detection, but I think it's been shown now that convolutional models just don't benefit from pre-training and just don't have the level of intelligence of transformer models.

[00:39:56] swyx: Awesome.

[00:39:59] Vik Korrapati: Can you hear me?

[00:40:01] swyx: Cool, I hear you, see you. Are you sharing your screen?

[00:40:04] Vik Korrapati: Hi. Might have forgotten to do that. Let me do that. Sorry, should have done that.

[00:40:17] swyx: Here's your screen. Oh, classic. You might have to quit Zoom and restart. It's fine, we have a capture of your screen. So let's get to it.

[00:40:35] Vik Korrapati: Okay, easy enough.

[00:40:49] Vik Korrapati: All right. Hi, everyone. My name is Vik. I've been working on Moondream for almost a year now. Like Shawn mentioned, I just went and looked, and it turns out the first version I released was December 29, 2023. It's been a fascinating journey. Moondream started off as a tiny vision-language model.
Since then, we've expanded scope a little bit to also build some tooling, client libraries, et cetera, to help people really deploy it.

[00:41:13] Vik Korrapati: Unlike traditional large models that are focused on assistant-type use cases, we're laser-focused on building capabilities that developers can use to build vision applications that can run anywhere. In a lot of cases for vision, more so than for text, you really care about being able to run on the edge, run in real time, et cetera, so that's really important.

[00:41:40] Vik Korrapati: We have different output modalities that we support. There's query, where you can ask general English questions about an image and get back human-like answers. There's captioning, which a lot of our users use for generating synthetic datasets to then train diffusion models and whatnot; we've done a lot of work to improve quality there, so that gets used a lot. We have open-vocabulary object detection built in, similar to a couple of more recent models like PaliGemma, where rather than having to train a dedicated model, you can just say "show me soccer balls in this image" or "show me if there are any deer in this image" and it'll detect it. More recently, earlier this month, we released a pointing capability: if all you're interested in is the center of an object, you can just ask it to point out where that is, which is very useful when you're doing UI-automation-type stuff.

[00:42:33] Vik Korrapati: We have two models out right now. There's a general-purpose 2B-parameter model, which runs fine on a server, is good for our local LLaMA desktop friends, and can run on flagship mobile phones, using less memory even with our not-yet-fully-optimized inference client.

[00:43:06] Vik Korrapati: The way we built our 0.5B model was to start with the 2-billion-parameter model and prune it while doing continual training to retain performance. Our objective during the pruning was to preserve accuracy across a broad set of benchmarks. The way we went about it was to estimate the importance of different components of the model, like attention heads, channels, MLP rows and whatnot, using basically a technique based on the gradient. I'm not sure how much detail people want; we'll be writing a paper about this, but feel free to grab me if you have more questions. Then we iteratively prune a small chunk that will minimize the loss in performance, retrain the model to recover, and repeat. The 0.5B we released is more of a proof of concept that this is possible.

[00:43:54] Vik Korrapati: I think the thing that's really exciting about this is it makes it possible for developers to build using the 2B-parameter model, just explore and build their application, and then, once they're ready to deploy, figure out what exactly they need out of the model and prune those capabilities into a smaller form factor that makes sense for their deployment target. So yeah, very excited about that.
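The pruning paper isn't out yet, so the exact criterion is unknown; but the gradient-based importance estimate Vik alludes to is commonly implemented as a first-order saliency score, something like this generic sketch:

```python
import torch
import torch.nn as nn

def importance_scores(model: nn.Module, loss: torch.Tensor) -> dict:
    """First-order saliency: |weight * gradient|, summed per parameter
    tensor -- a common proxy for how much the loss would rise if that
    component were removed. Moondream's actual criterion is unpublished;
    this is just the generic recipe."""
    loss.backward()
    return {name: (p.detach() * p.grad).abs().sum().item()
            for name, p in model.named_parameters() if p.grad is not None}

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
loss = model(torch.randn(8, 16)).pow(2).mean()
scores = importance_scores(model, loss)
print(sorted(scores, key=scores.get)[:2])  # least important: prune first
```

Components with the lowest scores get pruned in small chunks, with retraining in between to recover performance, as Vik describes.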
[00:44:12] Vik Korrapati: Let me talk to you folks a little bit about another problem I've been working on recently, which is similar to the clocks example we've been talking about. We had a customer reach out who had a bunch of gauges out in the field. This is very common in manufacturing and oil and gas, where you have a bunch of analog devices that you need to monitor, and it's expensive to have humans look at them to make sure the system gets shut down when the temperature goes over 80 or something. So I was like, yeah, this seems easy enough; happy to help you distill that, let's get it going. Turns out our model couldn't do it at all.

[00:44:51] Vik Korrapati: I went and looked at other open-source models to see if I could just generate a bunch of data and learn from that. Did not work either. So I was like, let's look at what the folks with hundreds of billions of dollars in market cap have to offer. And yeah, that doesn't work either. My hypothesis is that the way these models are trained is on a large amount of image-text data scraped from the internet, and that can be biased. In the case of gauges, most gauge images aren't gauges in the wild; they're product images, detail images like these, where the needle is always set to zero, paired with alt text that says something like "GIVTO, pressure sensor, PSI, zero to 30" or something. And so the models are fairly good at picking up those details: they'll tell you that it's a pressure gauge, they'll tell you what the brand is, but they don't really learn to pay attention to the needle over there. And so, yeah, that's a gap we need to address.

[00:45:35] Vik Korrapati: So naturally my mind goes to: let's use synthetic data to solve this problem. That works, but it's problematic, because it turned out we needed millions of synthetic gauge images to get to reasonable performance. And thinking about it, reading a gauge is not a zero-shot process in our minds, right? If you had to tell me the reading in Celsius for this real-world gauge: there are two dials on there, so first you have to figure out which one to pay attention to, the inner one or the outer one. Then you look at the tip of the needle, you look at what labels it's between, you count the ticks, and you do some math to figure out what the reading probably is. So what happens if we just add that as a chain of thought, to allow the model to better learn the subtasks it needs to perform to accomplish this goal?
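To picture what such a training example might look like, here's a hypothetical record illustrating that decomposition; every field name is invented for illustration and this is not Moondream's actual schema (nor the on-screen example Vik describes next):

```python
# Hypothetical training record showing the gauge-reading decomposition;
# the field names are invented for illustration, not Moondream's schema.
example = {
    "image": "gauge_0001.jpg",
    "question": "What is the reading in Celsius?",
    "chain_of_thought": [
        "Celsius is the inner scale.",
        "The needle tip is between the 50 and 60 labels.",
        "There are 10 ticks between labels, so each tick is 1 degree.",
        "The needle is on the 4th tick past 50.",
    ],
    # grounded points the model must also predict (normalized x, y)
    "points": [{"label": "needle tip", "x": 0.62, "y": 0.41}],
    "answer": "54",
}
```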
[00:46:37] Vik Korrapati: So you can see in this example, which was actually generated by the latest version of our model. It's like: okay, Celsius is the inner scale; it's between 50 and 60; there are 10 ticks; so it's the second tick. It's a little debatable here, there's a weird shadow situation going on and the dial is off, so I don't know what the ground truth is, but it works okay.

[00:46:57] Vik Korrapati: The points on there are actually grounded. I don't know if this is easy to see, but when I click on those, there's a little red dot that moves around on the image; the model actually has to predict where these points are. I was originally trying to do this with bounding boxes, but then Molmo came out with pointing capabilities, and pointing is a much better paradigm to represent this.

[00:47:15] Vik Korrapati: We see pretty good results. This one's actually for clock reading; I couldn't find our chart for gauge reading at the last minute. The light blue curve is with our grounded chain of thought. We built a clock-reading benchmark of about 500 images, and this measures accuracy on that. You can see the model is a lot more sample-efficient when you're using the chain of thought.

[00:47:37] Vik Korrapati: Another big benefit of this approach is that you can kind of understand what the model is doing and how it's failing. So in this example, the actual correct reading is 54 Celsius and the model output 56. Not too bad, but you can actually go and see where it messed up: it got a lot of these steps right, except instead of saying the needle was on the 7th tick, it predicted the 8th tick, and that's why it went with 56.

[00:48:14] Vik Korrapati: So now that you know it's failing in this way, you can adjust how you're doing the chain of thought: maybe say, actually count out each tick from 40, instead of just trying to say it's the eighth tick. Or you might say, okay, I see that there's that middle marker, I'll count from there instead of all the way from 40. So it helps a ton.

[00:48:31] Vik Korrapati: The other thing I'm excited about is few-shot prompting, or test-time training, with this. If a customer has a specific gauge that we're seeing minor errors on, they can give us a couple of examples: if it's mis-detecting the needle, they can go in and correct that in the chain of thought, and hopefully it works the next time. Now, it's an exciting approach, and we've only applied it to clocks and gauges. The real question is: is it going to generalize? Probably. There's some evidence from text models that when you train on a broad number of tasks, it does generalize, and I'm seeing some evidence of that with our model as well.

[00:49:05] Vik Korrapati: So, in addition to the image-based chain-of-thought stuff, I also added some spelling-based chain of thought to help it better understand OCR, I guess. I don't understand why everyone doesn't do this, by the way; it's a trivial benchmark question that's very, very easy to nail. But I also wanted to support it for stuff like license-plate partial matching: hey, does any license plate in this image start with "WHA", or whatever? So yeah, that sort of worked.

[00:49:29] Vik Korrapati: All right, that ends my story about the gauges. If you think about what's going on over here, it's interesting that LLMs are showing enormous progress in reasoning, especially with the latest set of models we've seen, but I have a feeling that VLMs are lagging behind, as we can see with these tasks that should be very simple for a human to do yet are very easy to find VLMs failing at.

[00:50:04] Vik Korrapati: My hypothesis on why this is the case is that, on the internet, there's a ton of data that talks about how to reason. There are books about how to solve problems; there are books critiquing the books about how to solve problems. But humans are just so good at perception that we never really talk about it. Maybe in art books, where it's like, hey, to show that that mountain is further away, you need to desaturate it a bit, or whatever. But the actual data on how to look at images isn't really present. Also, the data we have is kind of sketchy.
The best source of data we have is image alt-text pairs on the internet, and that's pretty low quality. So yeah, I think our solution here is really just: we need to teach these models how to operate on individual tasks and figure out how to scale that out.

[00:50:40] Vik Korrapati: All right, so, conclusion. At Moondream we're trying to build amazing VLMs that run everywhere. It's a very hard problem, and there's much work ahead, but we're making a ton of progress and I'm really excited about it. If anyone wants to chat about more technical details of how we're doing this, or is interested in collaborating, please, please hit me up.

[00:51:09] swyx: Yeah, when people say multi-modality, I always think about vision as the first among equals in all the modalities. So I really appreciate having the experts in the room.

Get full access to Latent Space at www.latent.space/subscribe
Andrew & Michael Jaworskyj were expecting to round off the year and the start of the winter break in slightly more low-key fashion until this week's news came along... The pair talk:
MUDRYK POSITIVE DOPING TEST: What we know | What is Meldonium? | Sabotage accusations | Lie detectors | Chelsea, Maresca and Mudryk on the case thus far | Unprecedented times? | Mudryk's timeline since joining Chelsea
UKRAINE IN 2025: Ukraine WCQ draw France/Croatia + Iceland + Azerbaijan | Without Mudryk potentially - how does Rebrov adapt? | LW options? | Is a 5-3-2 permanent switch-up viable? | Zabarnyi's sensational form
EUROPEAN DESPAIR & UPL DISCUSSION: UCL/UEL disasters… | How can any big price tags really be justified right now if performances in Europe are so damn awful? | Pusic to stay because who else realistically is there? | Hryhorchuk - Carrasco's successor at LNZ | Chornomorets – transfer ban for outstanding debts | Vorskla clear-out | Krupskyi to MLS latest…
Listen to the above and MUCH, MUCH MORE in our latest episode! ********************************************** Want to help the families of fallen ultras cope through the first difficult months without their husbands, partners, fathers, brothers and sons? More Info & ways to donate here: standsofheroes.com ************************************************ Please subscribe to Ukraine + Football on your favoured podcast provider and leave a review if you are able to! You can also RATE us on Apple Podcasts & NOW Spotify - please give us 5 stars if you are able to! We are also now on YOUTUBE - for vlogs and live streams please subscribe here: https://www.youtube.com/channel/UCyiNMhP18iGwwov5FkcMY7Q Please email any questions, feedback or ideas to: ukraineplusfootball@gmail.com
S&P Futures are positive this morning as markets rebound from yesterday's selling pressure. The Fed delivered a rate cut and lowered its forward guidance on rates; however, the narrative from the Fed was rather hawkish, which caused heavy selling pressure. Congressional leaders' budget deal is running into some stiff pushback from Republican lawmakers. Earnings reports are due after the bell today from NKE & FDX. MU released forward guidance that disappointed the Street. Shares of LW are under pressure this morning as the company announced earnings along with a new CEO. Lennox (LII) will be joining the S&P 500 on Dec 23rd; ticker BILL will replace LII in the S&P 400. In Europe, stocks are trading lower, and oil prices are displaying slight losses in the pre-market.
The career of the Polish mathematician Stefan Banach begins in the 1920s in what was then Lwów. The now-legendary Scottish Café served him and his colleagues as a playground and laboratory for mathematical innovation. The idea for this podcast was developed by Demian Nahuel Goos at MIP.labor, the ideas workshop for science journalism on mathematics, computer science, and physics at Freie Universität Berlin, made possible by the Klaus Tschira Foundation. (00:00:10) Introduction (00:02:55) Dynamic times for mathematics (00:04:03) Banach and mathematics (00:06:27) A fateful encounter (00:09:21) The Lwów school of mathematics and the Scottish Café (00:12:09) The problem with the ham sandwich (00:15:39) Dividing the pizza and the intermediate value theorem (00:18:31) Ham sandwich theorem: the ham sandwich in the gas sphere (00:22:28) Christmases & Coffeehouses (00:25:27) Farewell >> Read the article here: https://detektor.fm/wissen/geschichten-aus-der-mathematik-stefan-banach
Here is a little gift to our GK and Friends subscribers. (In the Back Room, the paid subscribers receive a monologue from the 80's weekly) 12.18.82: Calm falls over LW the week before Christmas, yet there still are no Christmas lights on Main St. GK went back after school was done and went to Christmas Eve service with the eldest Ingqvist daughter (they had been chatting through the fall). He was anything but calm at the service. As they sang "Silent Night" at the service, the congregation broke into tears. No matter how hard he tried he couldn't bring up a tear. After the service, the two of them were talking and he said something he thought was funny. She told him how terribly cold he was and she didn't want to see him anymore. Now the tears came. GK also remembered the story of the Lundeen family, Mel and Clarice and their eight children. Mel had fallen off the barn and was in the hospital for four months. Christmas was going to be sparse during this time and James was particularly disappointed he wasn't getting a Lionel train set. BUT when dad finally came home just before Christmas, they all learned this was the greatest present. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit garrisonkeillor.substack.com/subscribe
“They rat-a-tat off each other, and Joe Pesci really well...” - Chris, on the cast's chemistry On this month's total banger We ❤️ Movies episode, we're gifting ourselves an early Christmas present by re-watching and talking all about the total I-Got-It-By-The-Way sequel, Lethal Weapon 2! How fantastic is that toilet sequence? Is this the only LW film where Pesci is tolerable? How funny is it that Steve Kahan, the actor who plays Captain Murphy, looks and sounds exactly like the late, great Dick Donner? And why couldn't they have kept Shane Black's—whoa there! This episode is for subscribers only! To access the full show, along with countless hours of exclusive shows you can't get anywhere else, sign up for our Patreon today! You'll get this full episode on the fantastic Lethal Weapon 2, along with access to our other shows like Animation Damnation, The Nexus, MELR0210 & more!
This issue of LW focuses on the community of family. Peter reminds us that Christians have a much larger family, our “brotherhood throughout the world” (1 Peter 5:9). Paul says something similar in 1 Corinthians 1:2: “To the church of God that is in Corinth, to those sanctified in Christ Jesus, called to be saints together with all those who in every place call upon the name of our Lord Jesus Christ, both their Lord and ours.” While we properly focus most of our attention on our local communities of family and congregation, our Lord has a global mission: “This gospel of the kingdom will be proclaimed throughout the whole world as a testimony to all nations, and then the end will come” (Matthew 24:14). Peter's emphasis on the common suffering of Christians throughout the world is a reminder that we are part of something much greater than ourselves, as Paul also writes: “There is one body and one Spirit — just as you were called to the one hope that belongs to your call — one Lord, one faith, one baptism, one God and Father of all, who is over all and through all and in all” (Ephesians 4:4–6). Rev. Carl Roth, pastor of Grace Lutheran Church in Elgin, TX, joins Andy and Sarah to talk about the “Searching Scripture” feature in the December 2024 issue of the Lutheran Witness titled “A Worldwide Community” on 1 Peter 5:8-14. This year, “Searching Scripture” is themed “Elect Exiles” and will walk through the First Epistle of St. Peter. Follow along every month and search Scripture with us! Find online exclusives of the Lutheran Witness at witness.lcms.org and subscribe to the Lutheran Witness at cph.org/witness. As you grab your morning coffee (and pastry, let's be honest), join hosts Andy Bates and Sarah Gulseth as they bring you stories of the intersection of Lutheran life and a secular world. Catch real-life stories of mercy work of the LCMS and partners, updates from missionaries across the ocean, and practical talk about how to live boldly Lutheran. Have a topic you'd like to hear about on The Coffee Hour? Contact us at: listener@kfuo.org.
Are you ready for a December full of great cinema? We're going to feast, because the month arrives loaded, including the release of one of the films of the year, 'Emilia Pérez', the trans narco-musical that triumphed at Cannes, is aiming at the Oscars, and stars Karla Sofía Gascón, the Spanish actress who is shaking up Hollywood. We analyze it in depth and chat with her and with Jacques Audiard, and we also discuss the latest Robert Zemeckis film with Tom Hanks and Robin Wright, the Spanish comedies hoping to strike it big over this holiday weekend, and other releases. In TV, we bring a little order to the flood of new shows with the latest international series worth watching.
Today's Sports Daily covers your College (45-46-1, 4-5 LW) and Pro picks (32-36, 3-2 LW, 9-3 on Best Bets for the season) for the weekend, including two plays today! Music written by Bill Conti & Allee Willis (Casablanca Records/Universal Music Group) Ads: BetOnline - America's Most Trusted Site For Online Wagering
Today's Sports Daily covers your College (37-39-1, 2-6 LW, Best Bet blowout winner LW!) and NFL Plays (27-31, 5-1 LW, 7-3 on Best Bets for the season!) for this upcoming weekend. Music written by Bill Conti & Allee Willis (Casablanca Records/Universal Music Group) Ads: BetOnline - America's Most Trusted Site For Online Wagering
Today's Sports Daily covers your College (35-33-1, 5-4 LW, 5-5 on Best Bets) and NFL plays (22-30, 3-3 LW, 6-3 on Best Bets incl a 1-0 start to this week with a winner on Cincinnati +6 last night!) Music written by Bill Conti & Allee Willis (Casablanca Records/Universal Music Group) Ads: BetOnline - America's Most Trusted Site For Online Wagering
This issue of LW focuses on the LCMS community, which is made up of “flock[s] of God” (1 PETER 5:2) under “the chief Shepherd” (1 Peter 5:4), who sends men to shepherd His holy people (in Latin, pastor means “shepherd”). If both preachers and hearers heed Peter's advice, congregations will be blessed with peace and concord; if they don't, dissension and conflict will ensue. Peter encourages humility for all Christians (1 Peter 5:6–7). Although he doesn't mention the example of Jesus in this passage, He should be on our minds as we act in humility toward one another and humble ourselves under the mighty hand of God: “Have this mind among yourselves, which is yours in Christ Jesus, who, though He was in the form of God, did not count equality with God a thing to be grasped, but emptied Himself, by taking the form of a servant, being born in the likeness of men. And being found in human form, He humbled Himself by becoming obedient to the point of death, even death on a cross” (Philippians 2:5–8). Rev. Carl Roth, pastor of Grace Lutheran Church in Elgin, TX, joins Andy and Sarah to talk about the “Searching Scripture” feature in the November 2024 issue of the Lutheran Witness titled “A Humble Community” on 1 Peter 5:1-7. This year, “Searching Scripture” is themed “Elect Exiles” and will walk through the First Epistle of St. Peter. Follow along every month and search Scripture with us! Find online exclusives of the Lutheran Witness at witness.lcms.org and subscribe to the Lutheran Witness at cph.org/witness.
Today's Sports Daily covers your plays in College Football (30-29-1, Best Bet winner last week!) and the NFL (19-27, 2-4 LW, 6-2 on Best Bets this season!) as I'm looking at some barking home underdogs in college this week and ugly dogs in pros. Music written by Bill Conti & Allee Willis (Casablanca Records/Universal Music Group) Ads: BetOnline - America's Most Trusted Site For Online Wagering
Today's daf is sponsored by the Greenstone cousins in honor of Lana Kerzner's birthday. "With love to our dear cousin Lana. Your commitment to learning is a profound tribute to the legacy of our parents, a testament to the values they instilled in us. May the merits of this learning bring you peace, joy, and health this year and every year, not only for yourself but as a blessing to all those around you." Today's daf is sponsored by Gabrielle and Daniel Altman in loving memory of Lisa Altman z"l on her 20th yahrzeit. "We miss her love, warmth, kindness, wisdom and spirit. Her memory and legacy will remain with us always." There are various halakhot relevant to males that do not apply to a tumtum (one whose genitals are covered up and it is unclear if they are male or female) whose skin is then perforated and is found to be a male. He cannot inherit as a firstborn, he cannot become a ben sorer u'moreh, his brit milah does not override Shabbat, and his mother does not have laws of impurity of a woman who gave birth. A difficulty is raised against two of these laws from a Mishna in Nidda 28a. A braita is brought to support the position that a tumtum described above cannot inherit a double portion as a firstborn. The braita also derives that one cannot be a firstborn if it is doubtful whether or not he is the firstborn. The Gemara then explains why this was stated - to explain that if two brothers are born at around the same time (from two different mothers) but it was dark and it was impossible to determine who was born first, no one receives the double portion. Rava held otherwise - they could each write an authorization that "If I am the firstborn, I give you my share," and they can jointly receive the double portion. However, Rav Pappa raised a difficulty with Rava's position and Rava retracted. A father is believed to say a particular son is the firstborn but what if there is a chazaka that a different child is the firstborn? Shmuel ruled that the two brothers write an authorization as mentioned above. The Gemara explains Shmuel's position that he was unsure whether the ruling is like Rabbi Yehuda, who believes a father in that case, or the rabbis who do not accept the father's testimony when there is a chazaka. If the rabbis don't accept the father's testimony, for what purpose did the verse in the Torah use the language of "yakir"? If the father could have given the son a double portion as a gift, it would have been effective, so of course then we can believe the father that this is the firstborn?! The answer is that the father could have only given a double portion as a gift to the son for property in his possession at the time or possibly for items that would later be in his possession (according to Rabbi Meir), but it would not have covered property that would be brought into the father's possession as he was dying. For this situation, the verse taught "yakir." Regarding believing a father about the status of his son, Rabbi Yochanan describes a situation in which a father says that a person is his son and then says that he is his Caananite slave. He is not believed to render the person a slave as he would never have called his slave his son in the first place. However, if he first called him his slave and then his son, we accept his last words as it's possible he meant originally that the son served him like a slave. The reverse is true for one who made a statement in front of the tax authorities. They raise a difficulty against Rabbi Yochanan from a braita, but resolve it.
Today's Sports Daily covers your College (28-22-1, 4-2 LW) plays for the weekend including EIGHT other plays that I'm 21-13-1 on the season, plus your NFL plays (17-23, 3-3 LW, 5-2 on Best Bets for the season!)
Today's Sports Daily covers this weekend's picks in College (24-20-1, 5-3 LW, Best Bet Winner last week on Texas) and Pros (14-20, 2-4 LW, Best Bet Winner last week on Lions and already 1-0 this week!). Music written by Bill Conti & Allee Willis (Casablanca Records/Universal Music Group) Ads: BetOnline - America's Most Trusted Site For Online Wagering Warren Sharp Football - 559 Page NFL Preview Magazine for $5! (normally $35) Promo Code: STEVE
Today's Sports Daily covers your weekend picks in College FB (19-17-1, 4-3 LW) and the NFL (12-16-1, 2-3-1 LW) as I also have thoughts (and picks) on the biggest games of the weekend. Music written by Bill Conti & Allee Willis (Casablanca Records/Universal Music Group)
BANG! @southernvangard radio Ep410! Well this is long overdue. This was recorded the last Sunday in August, but one of Doe's kids had his appendix out (he's doing great btw) the day the show was supposed to post. Doe completely forgot to post the show until Meeks reminded him over Labor Day Weekend. Doe's turntables have also been out of commission for the last week so we didn't do Ep411 this week. Long story short, it's been a trying two weeks here at Southern Vangard Radio - excuses aside, here's Ep410, and we'll be back on the block in true form for Ep411 on Sunday. THAAAAANK YAAA and YOU WAAAAALCOME!!!!! #SmithsonianGrade #WeAreTheGard // southernvangard.com // @southernvangard on all platforms #hiphop #undergroundhiphop #boombap ********** Recorded live August 25, 2024 @ Dirty Blanket Studios, Marietta, GA southernvangard.com @southernvangard on all platforms #SmithsonianGrade #WeAreTheGard twitter/IG: @southernvangard @jondoeatl @cappuccinomeeks ********** Pre-Game Beats - Da Grand Hova "Southern Vangard Theme" - Bobby Homack & The Southern Vangard All-Stars Talk Break Inst. - "Still Sharp" - DJ Pocket "High Vibrations" - Devine Carama ft. Che Noir & Deacon The Villain "Grow Up" - Es x Shark (prod. Pro Logic) "Upscale" - Planet Asia & 38 Spesh ft. The Musalini & A Plus Tha Kid "Coast" - Mute Won ft. Left Lane Didon Talk Break Inst. - "Bet You Won't Say That Shit" - DJ Pocket "Crumble Cake" - Planet Asia & 38 Spesh "You Need To Know" - Rhinocerous Funk + Silent Someone "Genes" - Benny Watts ft. Jakoby "Elohim (Remix)" - Devine Carama ft. Tony Wavy & Deacon The Villain "Soy Tu Papa" - Kingdom Kome & RUEN "Joseph Bologne" - WATERR (prod. Tone Beatz) "Liquor Store Liason" - Mickey Diamond "LW$" - Bub Rock ft. Mar & Rome Streetz Talk Break Inst. - "She Major" - DJ Pocket "Welcome Back" - Vice Souletric, RJ Payne, Juggernaut June "Stand Up" - Sonny Reddz ft. JoJo Pellegrino X GAWDS "FIGMENT" - CERTAIN.ONES "Call Of Duty" - Benny Watts ft. Born Unique & Rah Skrilla "Ruth Chris" - Aaron Shakur ft. Skrewtape "Kelly Tripucka" - Jalen Frazier ft. J Classic "Going Back To London (Cousin Avi)" - Curly Castro "Beautiful" - Ka Talk Break Inst. - "Never Should Have Loved H.E.R." - DJ Pocket "Iron Fence" - J Scott Da Illest
On this edition of the ArsenalVision Podcast, Elliot is joined by Tim and Paul to discuss the win over Lyon and begin to think about the team for the Wolves game on Saturday. The pod starts with some analysis of Arsenal's pressing and set pieces against Lyon and asks whether the attack is creating enough from open play. A few individual performances are covered. Then there's a section on some positional battles ahead of the new season, including at LW, striker, and the number 6 position. After that there's a section on transfer news and the freak-out that has inevitably followed. All that and more on this edition of the ArsenalVision Podcast. Sign up for our Patreon at patreon.com/arsenalvisionpodcast EXCLUSIVE NordVPN Deal ➼ https://nordvpn.com/ARSENALVISION Try it risk-free now with a 30-day money-back guarantee! Interested in advertising on this podcast? Email sales@bluewirepods.com Learn more about your ad choices. Visit podcastchoices.com/adchoices