Jones and Keefe recap every Week 2 game thus far in Wha' Happened. The Patriots traded Ja'Lynn Polk to the New Orleans Saints. Is he the worst Patriots draft pick ever?
Jones and Keefe were joined by Patriots linebacker Christian Elliss to discuss the play of the Patriots defense in yesterday's win. The guys also talked about head coach Mike Vrabel's interview with the Greg Hill Show and the latest on the status of cornerback Christian Gonzalez. Jones and Keefe took a lap around the rest of the NFL in the Week 2 edition of Wha' Happened? Finally, the guys discussed the Patriots' decision to trade wide receiver Ja'Lynn Polk.
Patriots linebacker Christian Elliss joined Jones and Keefe to discuss yesterday's loss and the play of the team's defense. Before talking about the Patriots' debut of wide receiver Stefon Diggs, the guys recapped some of the other highlights from Week 1 in Wha' Happened?
"The Nightwave Special", hosted by Dirk Deafner, is a music show dedicated to capturing the essence of the night through a blend of sexy, moody, and occasionally upbeat electronic tracks. The program features a diverse mix of genres, all chosen to complement the nocturnal atmosphere. Whether you're preparing for a night out, winding down after an event, driving through the city streets, or hosting a cozy gathering at home, The Nightwave Special sets the perfect mood. Feel the night, feel the vibes... this is The Nightwave Special.

⚡️ Like the show? Click the [Repost] ↻ button so more people can hear it!
Wha-cha do-win right now? It's time to tur-nup the volume and lih-seh-up to this lesson about link-in' sounds in English. Today, we're continuing our series on how to speak more naturally in English, and this time we're talking about something that really helps you sound more fluent — and that's linking sounds.

My AI English Tutor is HERE:
Join my Podcast Learner's Study Group here: https://learn.myhappyenglish.com/plsg
Visit my website for over 3,000 free English lessons: https://www.myhappyenglish.com/
After surviving WTJT in Hamburg, the excruciating physical and mental pain they had experienced was NO REASON for B.Engel and Tschebesta to skip the trip to the small city of Wels, Austria, to watch our beloved Turboneger perform a solo show. Wha...
Send us a text

Please take our survey and provide feedback! Thank you.
https://cincinnati.ca1.qualtrics.com/jfe/form/SV_bfnkUqsHT6PIj3M

Summary
Andy Mac! Stingers, UC Hoops, Muskie Hoops, Reds, Bengals. A legend. Cincinnati broadcasting legend Andy MacWilliams shares his journey through the world of sports broadcasting, from his early days in Albany to becoming a prominent voice in Cincinnati sports. He reflects on his experiences with the Cincinnati Stingers, his time with the Reds, and his insights into the current state of baseball. The conversation also touches on memorable players, the evolution of broadcasting, and personal anecdotes that highlight Andy's passion for sports in Cincinnati.

Takeaways
Andy MacWilliams reflects on his 50-year journey in Cincinnati.
He shares a memorable home run story from his youth.
The transition from the WHA to the NHL was challenging for the Stingers.
Andy discusses the significance of the 1987 Reds season.
He emphasizes the importance of play-by-play in sports broadcasting.
His friendship with Bob Costas shaped his early career.
Andy highlights the evolution of baseball and its players over the years.
He shares insights on current Reds players and their potential.
The impact of broadcasting legends like Joe Nuxhall on Cincinnati sports.
Andy discusses the cultural significance of Cincinnati chili.

Sound Bites
"I had to have a paycheck. Come on."
"I think I'll go with Skyline."
"I watched that Game 6 of the World Series."
Each time someone wants to become a British Citizen, they have to pass the 'Life in the UK' test. The aspiring Britisher (such as Jason) might hope that this test would comprise a series of questions that would highlight Britain's role as a global orderer, help prospective citizens understand the intricacies of British queuing culture, and provide insights into how to pay council tax or get on the ballot for Wimbledon tickets... but in reality, the test is about as Disorderly as the world currently is. In this episode, Jane and Jason discuss the intricacies and absurdities of Jason's experience taking the Life in the UK test. Jason quizzes Jane on what he was made to learn about the supposed essentials of British life, with questions such as: what is a Welsh cake made from? Who won against the Vikings? And 'what many of crosses compose the Union Flag? [sic]' And on a serious note, they discuss the very special intellectual contribution of Scots to global civilization and, as they Order the Disorder, they talk about whether Britain can be a convening power, and a genuine Mega Orderer, in the mid-21st-century world.

Producer: George McDonagh

Subscribe to our Substack - https://natoandtheged.substack.com/
Disorder on YouTube - https://www.youtube.com/@DisorderShow

Show Notes Links:
Recipe for Welsh cakes: https://www.bbc.co.uk/food/recipes/welsh_cakes_16706
Can YOU pass a UK citizenship test? Brits joke they 'better pack their bags' after struggling to answer the general knowledge questions: https://www.dailymail.co.uk/femail/article-14849235/Can-YOU-pass-UK-citizenship-test-Brits-joke-better-pack-bags-struggling-answer-general-knowledge-questions.html
More than half of the population are unable to pass the UK citizenship test - but how well would YOU do? https://www.dailymail.co.uk/news/article-13584185/more-than-half-of-the-population-are-unable-to-pass-the-uk-citizenship-test-but-how-well-would-you-do.html
Wha's like us? Damn few, and they're a' deid: https://www.robbiemactours.co.uk/whas-like-us-damn-few-and-theyre-a-deid/

Learn more about your ad choices. Visit megaphone.fm/adchoices
In hour 1, Chris rants about 'Feels-Like' temperatures that prove your thermometer is off by 15 degrees! Wha? Also, Rosie O'Donnell is still yelling about Trump from Ireland, but will she force the island to capsize? For more coverage on the issues that matter to you, download the WMAL app, visit WMAL.com or tune in live on WMAL-FM 105.9 from 9:00am-12:00pm Monday-Friday To join the conversation, check us out on X @WMAL and @ChrisPlanteShow Learn more about your ad choices. Visit podcastchoices.com/adchoices
Two benchmark events in early summer kick off every NHL season: the Draft and the opening of free agency. With the Draft coming up first, Vic and Neil look back at the structure of the process, how players selected were communicated with, the effect of the WHA in the early 1970s, and whether social media and instant communication add additional pressure on current-day draftees.

#NHLWraparound #ShortShifts #NYCentric #StanleyCupdate #NeilSmith #VicMorren #NHL #HumanSideoftheStory
It's our 400th, so we're going big with a guest who's called it all, seen it all, and somehow lived to laugh about it. Steve Albert ("A Funny Thing Happened on the Way to the Broadcast Booth") -- Hall of Fame broadcaster and proud member of the legendary Albert sportscasting family (including nephew/Episode 320 guest Kenny) -- joins us for a deep dive into his one-of-a-kind, 45-year ride through the wilds of professional sports. From vanished leagues to unforgettable fights, from Brooklyn bedrooms-turned-broadcast-booths to center stage at Showtime Championship Boxing, Albert's stories are equal parts history and hilarity. In this special milestone episode, we retrace Albert's journey through memorable stops like:

The WHA's Cleveland Crusaders, where his broadcast partner was the coach's elbow-needling wife;
The MISL's New York Arrows, where goal-scoring was nonstop and whiplash an occupational hazard;
The final ABA game ever played, which he and his older brother Al called from opposing sides;
30+ years across the NBA, including 19 seasons with the New York and New Jersey versions of the Nets, and a career-capping, Emmy-winning turn with the Phoenix Suns;
Local New York TV sports anchor stints, where juggling 6 o'clock newscasts and rush-hour traffic to call evening games became an art;
And, of course, his nearly quarter-century ringside seat with Showtime Championship Boxing -- including the infamous Tyson–Holyfield (II) "Bite Fight."

We also talk about growing up in a house where three brothers fought over the mic instead of the remote, how a botched bathroom door nearly derailed a broadcast, and why the strangest moments in sports often happen outside the lines of the game.
+ + + SUPPORT THE SHOW:
Buy Us a Coffee: https://ko-fi.com/goodseatsstillavailable
"Good Seats" Store: https://www.teepublic.com/stores/good-seats-still-avalable?ref_id=35106

BUY THE BOOK (AND SUPPORT THE SHOW!):
"A Funny Thing Happened on the Way to the Broadcast Booth": https://amzn.to/4negHqc

SPONSOR THANKS (AND SUPPORT THE SHOW!):
Old School Shirts.com (10% off promo code: GOODSEATS): https://oldschoolshirts.com/goodseats
Royal Retros (10% off promo code: SEATS): https://www.503-sports.com?aff=2
Old Fort Baseball Co. (15% off promo code: GOODSEATS): https://www.oldfortbaseballco.com/?ref=seats
Yinzylvania (20% off promo code: GOODSEATSSTILLAVAILABLE): https://yinzylvania.com/GOODSEATSSTILLAVAILABLE
417 Helmets (10% off promo code: GOODSEATS): https://417helmets.com/?wpam_id=3

FIND AND FOLLOW:
Linktree: https://linktr.ee/GoodSeatsStillAvailable
Web: https://goodseatsstillavailable.com/
Bluesky: https://bsky.app/profile/goodseatsstillavailable.com
X/Twitter: https://twitter.com/GoodSeatsStill
YouTube: https://www.youtube.com/@goodseatsstillavailable
Threads: https://www.threads.net/@goodseatsstillavailable
Instagram: https://www.instagram.com/goodseatsstillavailable/
Facebook: https://www.facebook.com/GoodSeatsStillAvailable/
LinkedIn: https://www.linkedin.com/company/good-seats-still-available/
Richard, Jessamy, and Gavin reflect on developments at the 78th World Health Assembly, including the passage of the pandemic agreement and shifting dynamics in global health leadership. What's next for WHO without US engagement? How has WHA changed over the years?

We also address the importance of recommitting to adolescent health following our new Commission, and discuss a controversial recent study about the pace of scientific innovation.

You can read the second Lancet Commission on Adolescent Health & Wellbeing here: https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(25)00503-3/fulltext?dgcid=buzzsprout_icw_podcast_lancetadolescenthealth25_lancet

This is the paper on the pace of scientific innovation discussed on the podcast: https://www.nature.com/articles/d41586-025-01548-4

Send us your feedback!

Read all of our content at https://www.thelancet.com/?dgcid=buzzsprout_tlv_podcast_generic_lancet
Check out all the podcasts from The Lancet Group: https://www.thelancet.com/multimedia/podcasts?dgcid=buzzsprout_tlv_podcast_generic_lancet

Continue this conversation on social! Follow us today at...
https://thelancet.bsky.social/
https://instagram.com/thelancetgroup
https://facebook.com/thelancetmedicaljournal
https://linkedIn.com/company/the-lancet
https://youtube.com/thelancettv
TACO Tuesday! David Waldman gets us this far, at least. Donald K. Trump wants to run the country and the world but needs to maintain his golf game. Donald juggles his many momentous obligations by, well, golfing. Also, Trump has his iPhone, which anyone, anywhere in the world can call anytime they want to hear a "Wha, huh?" directly from the most powerful man on Earth... if he's not text messaging. This system worked perfectly while Elon was around. Musk has since moved on, but did pass on the gift of incel focus to his old pal Stephen Miller, who is now putting all of his newfound extra spare time into his job. It's Stephen Miller vs. Harvard, only one will be left standing, unless of course, TACO. Actor Jonathan Joss was killed in a San Antonio shooting. Police see no evidence of a hate crime, other than Joss' complaints of constant verbal harassment, his house being burnt to the ground, and his dog being beheaded... After all, you don't need much of a reason to shoot someone in Texas. Then there's Texas' former solicitor general, who really knows how to put the "ass" in "asteroid"... or maybe vice versa. Over in Idaho, Trump USDA nominee Michael Boren built himself a cabin on National Forest land, then diverted a stream and put in an airstrip, just like a pioneer.
Looking into the bullwhip effect from last week's Road Check blitz: how did the market respond? Then Dean and Dan dive into a brief rant about AI, its impact on logistics, and what the future could hold, based on the insights of some of the most prominent minds of the next "Industrial Revolution."
The World Health Assembly, the highest decision-making body of the World Health Organization, on Monday decided not to include in its agenda a proposal on the participation of China's Taiwan province in the annual assembly as an observer.

The decision was made by both the general committee and the plenary session of the 78th WHA. This is the ninth consecutive year that the global health agency has rejected such a proposal.

Speaking at the first plenary meeting of the assembly, Chen Xu, permanent representative of China to the United Nations Office at Geneva and other international organizations in Switzerland, said the proposal blatantly challenges the authority of the UN and the postwar international order.

"This year marks the 80th anniversary of the victory of the World Anti-Fascist War and the 80th anniversary of the recovery of Taiwan province," he said. "Taiwan province's return to China is an integral part of the outcomes of the victory in World War II and the postwar international order. The UN General Assembly Resolution 2758 and World Health Assembly Resolution 25.1 have long since resolved the issue of China's representation, including that of the Taiwan province."

"For many consecutive years, the WHA has rejected such a proposal, thereby upholding the authority of the UN and the postwar international order. The fact clearly shows that the path of 'Taiwan independence' is a dead end and is once again doomed to fail."

Chen also pointed out that it is the separatist activities pursued by the Democratic Progressive Party authorities of Taiwan province in recent years that have eliminated the political foundation for Taiwan province's participation in the assembly, and that, under the one-China principle, the engagement of Taiwan province with the WHO is not subject to any difficulties or obstacles.

According to the Chinese Foreign Ministry spokesperson's remarks in response to the 78th WHA's rejection of the province-related proposal, the Chinese central government attaches great importance to the health and well-being of the compatriots in Taiwan province, and has made proper arrangements for the province's engagement in global health affairs under the one-China principle. The spokesperson said the central government has approved the participation in WHO technical activities by 11 batches of 12 health experts from Taiwan province over the past year, and under the framework of the International Health Regulations, Taiwan province can promptly access health emergencies information from the WHO and report such information to it.

Chen also underlined the above points in his remarks during the assembly, saying, "The proposal hyping up the so-called gap in the international pandemic prevention system is completely inconsistent with the facts."

"The one-China principle is a consensus of the international community," Chen added. "To date, 183 countries have already established diplomatic relations with China on the basis of the one-China principle."

The 78th WHA, themed "One World for Health", runs through May 27. Attended by delegations from all 194 WHO member states, it brings together high-level country representatives and other stakeholders to address global health challenges.

Vocabulary: proposal; the World Health Organization; plenary session; hype up; be inconsistent with; stakeholder
Today in our magazine we bring you an in-depth feature spotlighting beekeepers in Tanzania, on World Bee Day, established by the UN General Assembly on 20 December 2017 through resolution A/72/211.

After three years of negotiations, today at the World Health Assembly underway in Geneva, Switzerland, countries formally adopted a historic agreement to prevent, prepare for, and better respond to future disease pandemics. Once all procedures are completed and at least 60 countries have ratified it, the treaty will officially enter into force next year.

With Israeli authorities temporarily easing the blockade that had lasted 11 weeks, at least a little hope has returned to Gaza, UN agencies said today. The spokesperson for the UN emergency affairs office, OCHA, Jens Laerke, said they have already received permission to bring across five trucks that were held back yesterday.

The UN World Food Programme, WFP, warned today that without further funding it may, within the next few weeks, be forced to suspend food assistance for roughly half of the people it currently serves in the eastern Democratic Republic of the Congo, DRC.

And in our grassroots segment, the floor goes to Hanan Al-Dayya, an internally displaced person in the Gaza Strip, in the Israeli-occupied Palestinian territory, recounting the impact of the ongoing attacks from Israel.

Your host is Flora Nducha. Welcome!
We are excited to be joined by ANDERS HEDBERG, the very first player in professional hockey history to score 50 goals in fewer than 50 games, scoring 51 goals in just 49 games as a member of the Winnipeg Jets back in their WHA days. Along with countryman Ulf Nilsson, he was one of the first Swedish players to make their way to North America, where he starred for both the Winnipeg Jets and the New York Rangers, averaging over a point a game during his 11 years with the two teams. Please welcome our friend, and a Swedish hockey legend, Anders Hedberg!

See omnystudio.com/listener for privacy information.
**Kung Ming-hsin: with the special act implemented, economic growth could top 3%
**The US-Saudi chip deal is a global AI watershed; the Middle East will have an edge over China
**Jensen Huang's jet lands in Taiwan! He says Taiwan will remain the center of the global tech ecosystem
**If Europe were to do entirely without the US military, it would take 25 years plus NT$30 trillion
**Taiwan not invited to the World Health Assembly; China's Foreign Ministry: China does not agree to Taiwan taking part in this year's WHA
**Chinese Douyin livestreamers filming elementary school students! Not just Taipei's Minsheng Community; a check finds filming all over Taiwan
**More than 20 Taiwanese entertainers allegedly targeted over united-front work, with the list to be made public? Mainland Affairs Council: disclosure will be assessed based on investigation results and handled according to law
**Weng Hsiao-ling says "Taiwan should not increase its defense budget"; Yaita Akio: this invites misunderstanding in the international community
**Shock! World Masters Games watches reportedly use Chinese chips; Hsu Shu-hua blasts Chiang Wan-an for helping launder their origin: Taiwan loses face

[From the Bluebird Movement to the Great Recall: a review and outlook of Lai Ching-te's first year in office]
Time: Saturday, May 17, 2025, 09:30-17:30
Venue: World United Formosans for Independence office (14F-1, No. 11, Qingdao West Road, Taipei)
Organizers: World United Formosans for Independence, the Taiwan Security Association, and the Modern Culture Foundation
Registration link: https://reurl.cc/dQ8qz2

❤️ You're welcome to subscribe, watch, listen, like, and share!
[All rights belong to the Formosa (Baodao) Radio Network; reproduction or reuse without authorization is prohibited; please contact us by email for requests]
#寶島聯播網 #鄭弘儀 #寶島全世界 #川普 #黃仁勳 #翁曉玲 #世界衛生大會
Support the show with a small donation: https://open.firstory.me/user/clw4248xv113d01wg7s4h2xnq
Leave a comment and tell me your thoughts on this episode: https://open.firstory.me/user/clw4248xv113d01wg7s4h2xnq/comments
Powered by Firstory Hosting
With the aim of preparing the world to confront pandemics like COVID-19, a draft global pandemic preparedness agreement between countries was recently finalized. It will be presented for adoption at the World Health Assembly, being held from Monday, 19 May. Nikhil Gupta, a Youth Mobilizer working with UNDP through UNV, the UN Volunteers programme in India, worked extensively as a volunteer during the COVID-19 pandemic. UN News' Anshu Sharma spoke with him about his work during COVID-19, the lessons it taught him, and his hopes for the World Health Assembly...
As the World Hockey Association approached its first season, things were getting tense with the NHL. Chicago Blackhawks owner Bill Wirtz described the battle thus: "The war with the WHA was the third-bloodiest war in history, behind the Civil War and the Peloponnesian War." The WHA was more worried about having arenas where the ice didn't have hills.

If you'd like more Sports Bizarre, become a member of Bizarre Plus. Click here to join today. As a member, you'll get:
A weekly bonus podcast
Access to all past episodes
Exclusive behind-the-scenes access
Access to the members-only chatroom
Ability to vote on future episodes
Early access to any live show tickets

See omnystudio.com/listener for privacy information.
This episode breaks down the ripple effect of policy pivots and political posts on the freight and logistics industry. Just as the market showed signs of recovery, renewed tariff talks sent shockwaves through supply chains, shifting forecasts, freezing shipper strategies, and rewriting playbooks overnight. We explore how a single post on Truth Social can spark economic uncertainty and dive into China's undeniable role in the North American logistics ecosystem.

If you're a freight leader, 3PL rep, or logistics strategist trying to make sense of the chaos, this is your frontline briefing. Get the insight you need to adapt fast, stay profitable, and turn disruption into opportunity.

Key Topics Covered:
Why the freight rebound was gaining steam, and what halted it
How social media is becoming a new economic disruptor
The real cost of tariff uncertainty for shippers and carriers
What you must factor in now when building resilient supply chains

Tune in, and turn volatility into your competitive advantage.
Rail Tariff Shockwave

This episode examines the ripple effects of Trump-era tariffs and how they've reignited chaos in America's rail sector. Dean and Dan explain what this means for brokers, shippers, and everyday operations, from sudden rate hikes and import bottlenecks to the scramble for capacity. With supply chains straining under new global tension, this isn't just a rate spike, it's a logistics shockwave. Get ready to hear the unfiltered truth, the data behind the headlines, and what you must do now to protect your margin and clients.
Wives get to thinking about how life is too short. Based on the works of CoyoteHoward. Listen to the podcast at Steamy Stories.

Jenny & The Barbeque Gathering

It was the picture of Americana in southwest Idaho. A partly cloudy sky, with more sun than shade. Deep green grass. Horses munching away in the pasture while the kids, whose ages ranged from 2 to 16, played on the trampoline and playset. The husbands were mostly under the porch overhang, gathered around the grill, while Osvaldo and his 8-year-old son Elliot jokingly played cornhole in the grass. Their wives were on the furniture at the other end of the porch, doing as women do, keeping an eye on the children for the most part and enjoying their own trials and tribulations, most of which focused on family dramas, future plans and prices for various groceries.

"Yeah, so what I'd like to do," Brady said, beginning to flip the burgers from the top left, "is kinda what you did, but I'd like to do 4 rails instead."

Steve nodded and took a drink of beer from his Payette Brewing Co. bottle. He absentmindedly watched Brady do so, his left thumb tucked into the front pocket of his jeans, shifting his cowboy-booted feet to equal distribution instead of one leg being cocked slightly. His slight belly showed his 36 years of age, and while he didn't like it, and wished he could find the consistent motivation to work out, his wife didn't mind, and his shirts still fit, including the plain white t-shirt he wore now.

"Yeah, I don't mind the three, but the three inch- I wish I'd of been able to afford the three and a half," Steve said, shifting the bottle to his left and adjusting his multicam hat on his head, though it needn't be done. His brown, fade-cut hair wasn't bothering him; it was more just a habit.

"You did your fence yourself?" Jeff asked.
He was blond, worked out tons and was wearing a polo, cargo shorts and flip flops.

Steve nodded, "Yeah, the little mustang got out suddenly last year, little shit."

The women meanwhile were discussing flowers.

"I'm so jealous of your little play area, Jenny," Hannah said, taking a sip of her soda. She was married to Brady, and three of the tikes running around were hers. She was 36, 5'7" and 133 pounds. She knew she was attractive, as all the women here were, but her husband appreciated her the most, and that's exactly the way she preferred it. They'd been married for well over 10 years, he was the father of all her babies, and they led a great life.

"Well, it's been a lot of work, but yeah, it's coming together," Jenny said. "We've done a ton of work just to try and keep the weeds away." Her husband was Steve, and as she finished her sentence she looked over at her man. They'd been together the longest of the group of six couples, having been dating since junior year of high school, over 18 years prior. They had the second oldest child there, at 15, and the second youngest as well, a three-year-old girl. They'd been the ones to leave, though, he going into the Army right after high school and finally leaving six years prior, and they'd all reconnected.

Steve was still her king, though, and she his queen, as they routinely told each other. Even now, as Heather, a half-Asian, half-Hispanic woman, asked her about the newest berry they'd planted, Jenny couldn't help but think about what her king had done to her last night, and her panties got warm under her flowery, blue, spaghetti-strapped sundress. Steve noticed her looking at him and flashed her a smile, giving his queen a fun wink. And that's why she couldn't help but love him. He just did those little kinds of things that other men didn't with their wives.
Sure, he had a temper, he played video games, his memory was horrible. But his positives more than made up for it.

"I'd like to plant blackberries, especially if they have uh, no thorns," Amanda winked, and took a bite of potato salad. She was a short, slightly heavy black-haired woman married to Osvaldo. She looked over and saw her son and husband still playing cornhole, though Jeff and Joe had gone over to play with them. They were married to Heather and Ellen, respectively, to Amanda's left.

"Yeah, me too," Hannah said, to which the others laughed slightly.

"Bullshit," Kelly said, deciphering the code words. "You have too much going on already. Brady would strangle you!"

"Oh, he'd be a little upset, but he always cools off," Hannah said, chuckling.

But Jenny couldn't get the thought out of her mind now. The thought of how Steve had taken extra care to put the baby to bed, to not play Mass Effect, and to take her to bed. He'd sweetly pulled her jeans off, then nuzzled and licked at her cunt through her panties until she'd cum, THEN he had proceeded to have his way with her, bringing her off several more times before finishing off inside her. She imagined she could still feel his cum, making her wetter still.

She suddenly looked at the whole situation. At everyone around her and the thought of them getting old, tired, and ending...

"Hannah, watch Claire for me. I'm gonna go get fucked silly in your powder room," she said, locking eyes with her friend and rising with a slight smirk.

Hannah's eyes went wide as she choked slightly and let out a huge smile. "What?!" she exclaimed, but Jenny was already striding across the patio to her man.

"Did she just-"
"What did she say?"
"Whoa!"
"Hahaha! Oh shit, she's really doing it!"

Jenny had reached Steve, grabbed him by the belt buckle with one hand and had begun leading him away, walking forward as if leading a stud to a mare.

"Hey babe, whoa, what's up?"
he asked.

She turned and smirked a small smile at him, and she knew it achieved the desired effect. Her intentions must have been written all over her face, because he couldn't help but put his beer down and follow, his own smile bursting forth. She led him through the door and didn't give him time to properly shut it, but he was able to with a strong hand.

"Jen, what are you doing?" Steve asked, grabbing her wrist. She was closer to her target though.

"I need you," she said, suddenly breathless as she kissed him deeply, her sexy body pressing up against his. She made sure to press her bra'ed 34C breasts into his chest, her left hand around his back, her right up in his short hair. Steve's hands went around her pinched waist first, then his left up her side and back while his right went around and down to her plump ass, cupping and kneading. She moaned at the touches, then broke the french kiss and backed away towards the half-bath by the front door. Steve followed eagerly and suddenly they were in the little bathroom, finding the light and locking the door behind them.

"Hun, what's gotten int-ohh shit!" Steve started, but she hushed him by immediately dropping to her knees and getting his jeans undone.

"Damn, girl, the fuck has gotten into you suddenly?" he asked, as she got the front of his pants open, not pausing, and pulled down his underwear too. But his hands went to her head, lightly rubbing the sides and back encouragingly.

"Can't I just want my husband?" she asked before throating his semi-hard, 6-inch cock in one go.

"Ah fuck," he said, his biology taking over for a moment as he thrust his hips an inch forward, his hands tightening on her head. Her tongue was going crazy on the underside of his shaft, the tip even coming past her bottom lip slightly to lick his balls as much as she could, and he got rigid hard in moments. He gasped and breathed as if he were in pain, but she knew he wasn't. Jenny didn't give him head very often, so this must be a real treat for him.
Though truth be told, this was a means to an end. She bobbed her face on his crotch for a dozen or so pumps, until she felt his cockhead nudge the back of her throat. That end was now. She rose, looked him in the eye as her right hand grasped his hard prick, some of her hair in her eye as she did so, stroking it in short strokes as she turned to the vanity and mirror.

God, she looked slutty. One of her spaghetti straps had fallen off her shoulder and her lips were an excited red from having just been stretched in an obscene 'O' around his magnificent cock. But she could still FEEL her sex drive, his taste still in her mouth. Her boobs were hypersensitive in their confines, feeling wonderfully constrained as she breathed, and her panties were probably soaked through. She pulled up the hem of her dress and bent over the counter, looking back at him over her right shoulder.

"God, just fuck me. Fuck me!" she said. "I need it."

Steve couldn't refuse this personification of pure lust in front of him. She wasn't his wife in this moment. She was a bitch in heat. A mare in season. And he was going to give her the beast she needed. He grabbed her brief-cut panties with both hands and yanked them down with animalistic urgency to her feet, where she stepped out with one sandaled foot. He then rose and put his right hand to her cunt, immediately confirming how wanton she was by the heat and wetness he found there, easily one of the wettest times he'd ever seen her.

"Oh fuck," she said, finding her own lustful gaze looking back at her in the vanity mirror, feeling his fingers run through her sex from her clit (which he brushed ever so slightly) right up to her asshole. She knew he must've thought about playing with it, as she'd finally let him take her ass several times in the past year. But he didn't linger; instead he stepped right up to her bent-over body and slid his steel-hard cock into her cunt, all in one go.

"Oh! Oh fuck! Oh god, that feels soo good!"
she practically screamed, but huskily. His hands went to her wide hips, finding the pelvic bones that made perfect obscene handles, and he began to piston her cunt, slowly. But she wanted more; she wanted to be fucked, and fucked well. She looked over her shoulder at him: "Steve, god damnit, fuck me!" With each stressed word she pushed herself back on his cock, sparks flying from her sopping cunt through her body as his rod plowed her depths.

Out at the patio, the ladies' conversation suddenly halted when the screams and moans were faintly heard coming from the little vent, high on the side of the house, which carried the sound from the powder room just on the other side of the brick exterior. First Claire took notice, then all the ladies went silent, their devilish grins showing their vicarious delight. A couple of the guys noticed the silence over at the other end of the covered patio, and then all the guys heard it too: the faint echo of a raging, hormonal woman's voice, just barely audible, yelling, "Steve, god damnit. Fuck me!"

Jenny was rewarded with her stud pulling her hips back so that she'd fall backwards if he wasn't there, cock lodged inside her. Her hands were wrapped tightly around the spout of the faucet, now somewhat in front of her as her hair swung with his thrusts. Her tits were swaying as much as her bra would allow, and the pulling on her chest added to her sexual experience. The thumb of her left hand subconsciously rubbed the underside of the chrome spout, but in her entranced state, she imagined it was Steve's turgid cock. In moments he was fucking her hard. Fast. Making her ass jiggle with every impact of his pelvis. She felt his cock running through her with abandon, the heat from her cunt quickly turning into a fire, then a blaze, until stars burst in her vision and she screamed a carnal, drawn-out "ah" in orgasm, her legs shaking uncontrollably.
"Steve, you beast!" she screamed in satisfaction. Her hands slipped as they clenched and gripped the sink, Steve stepping up as her hips were pushed forward against the edge of the counter. Whereas moments before she'd cum from her assertive pushing back, now she was trapped with nowhere to go. More precisely, her hole couldn't get away from the prick fucking it. Jenny realized that she'd be forced to cum at least once more, maybe even more than that. Her king had slowed as he'd trapped her, bringing his hands up to her shoulders and finding new grips with which to pound her.

She looked up and saw her sweaty self in the mirror again, her jaw dropped open as she breathed heavily with sexual arousal, her whole body jarring with each impact of Steve's hips against her ass. God, she was so sexy, and her cunt was doing such a good job of clenching around the invader, her body doing as it was designed to do, trying to bring the penis inside it to orgasm. Her hole wanted his semen. That was its purpose, to get fucked and filled by cum, so she could carry his child. And it was working, her own voice rising with every fourth or fifth quickening thrust as she felt her second orgasm building in her depths, Steve's cock hitting amazing pockets of nerves inside her.

It suddenly was upon her as her left hand pressed against the mirror, her right coming around to grab Steve's hip as her cunt exploded in pleasure, her eyes wide. She rocked herself back as he tried to pull out for another thrust, trying to keep him inside her as she came, throwing her head in an out-of-control nodding motion and half panting, half exclaiming "ahs." Steve, for his part, wasn't faring well on holding out. He regularly told Jenny that her orgasms would collect massive amounts of cash on the internet, and they usually brought him off.
But Jenny had never been this needy before, and though she did have bouts of increased sexual activity, this was a whole new level. As she came for the second time, the thrashing of her head, her hair flying and her hand on the mirror, almost got him. It was her hand landing on his right side, hip and ass cheek, coupled with her rocking cunt clenching on his shaft, that got him. He slammed forward to the hilt as his cum rose from his balls, rocketing down his weapon until it fired into her hot sheath. Again and again it fired. "Oh yeah! Uh! Uh! Uh! Take it baby!" he said through blurred vision and clenched teeth.

Out on the patio, the ladies were squirming; embarrassed, but getting aroused. Claire was frustrated when she had to go comfort a child who had tripped and fallen in the play area: "Tell me what I'm missing, Kelly."
Wha you need to know!
Wha??? See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Are we possibly shifting towards a different resolution of the truckload market's ups and downs? Today, we compare Spot Market Intermodal rates with Spot Market OTR rates throughout North America. The contrasts, and the similarities, will shock you.
Let's go deeper into tariffs, rhetoric, and imports. We'll explore the push-pull scenario and how it could further roil the logistics and supply chain world. Is there a bright side on the horizon, or will we have to deal with these challenges for years to come? This raises the question: where is all this going? How will it impact us in the long term, and what does the data tell us that can help us stay ahead of these rapidly shifting trends and industry changes?
Here's some research on what tariffs can/will do to the North American economy. IS THIS AN OPPORTUNITY for those who pivot? Only time will tell!

## Tariffs Summary

Overview of Tariffs:
A 25% tariff will be imposed on all Canadian imports starting March 4, 2025, except for energy products, which will face a 10% tariff.

Impacts on the U.S. Economy:
- Inflation: The tariffs could raise U.S. consumer prices by 0.5%-0.7%, pushing inflation closer to 3% by late 2025.
- Economic Growth: Estimated to reduce U.S. GDP growth by 0.6%-1% over the next year, with risks of stagnation if tariffs persist.
- Stagflation Risk: The combination of higher prices and slower growth may disproportionately affect low-income households.
- Job Losses: Tariffs may result in job losses in sectors reliant on imports or exports.

Impacts on Canadian Businesses:
- Export Challenges: Increased costs for U.S. consumers could reduce demand for Canadian goods.
- Energy Sector: The 10% tariff on energy products may leave this sector relatively less affected than others.

Impacts on Low-Income Consumers:
- Higher Costs for Essentials: Low-income households are disproportionately affected by increased prices on necessities like food and clothing.
- Reduced Purchasing Power: Tariffs might lower incomes for the poorest by up to 4%, compared to 2% for wealthier groups.
- Fewer Choices: Higher import costs may reduce product variety, forcing consumers to settle for lower-quality alternatives.

Long-Term Risks:
- Market Inefficiencies: Diversion to higher-cost suppliers and reduced global competitiveness of exports.
- Weakened Trade Relationships: Retaliatory tariffs could further strain trade dynamics and investment.
- Economic Stability: Prolonged tariffs risk undermining overall economic stability and growth.

## Logistics Industry Summary

The newly imposed tariffs on Canadian imports are set to profoundly impact the logistics industry, especially trucking.
Increased costs for transporting goods across borders will likely lead to higher freight rates and additional financial strain on trucking companies. Customs delays and complex documentation requirements will further disrupt operations, creating inefficiencies in cross-border logistics.

Trucking firms may need to adopt advanced technologies, such as real-time tracking and automated customs processing, to mitigate these challenges. Moreover, smaller logistics providers with limited resources will face heightened difficulties in adapting to these changes, potentially leading to market consolidation as larger firms absorb smaller ones to maintain competitiveness.

Overall, the tariffs will compel the logistics and trucking sectors to reevaluate strategies, invest in technology, and diversify routes to sustain operations in the face of increased operational complexities.
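To make the arithmetic behind these figures concrete, here is a minimal sketch of how the stated rates (25% on general Canadian imports, 10% on energy products) flow into landed cost at the border. The shipment values and category names are hypothetical, for illustration only.

```python
# Hypothetical landed-cost arithmetic for the tariff rates described above:
# 25% ad valorem on general Canadian imports, 10% on energy products.
TARIFF_RATES = {"general": 0.25, "energy": 0.10}

def landed_cost(customs_value: float, category: str) -> float:
    """Customs value plus the ad valorem tariff for the given category."""
    return customs_value * (1 + TARIFF_RATES[category])

# A $1,000 shipment of general goods now costs $1,250 at the border,
# while the same customs value in energy products costs about $1,100.
print(landed_cost(1000, "general"))  # 1250.0
print(landed_cost(1000, "energy"))
```

That 25-point spread per shipment is the mechanism behind the freight-rate and demand effects described above: either the importer, the carrier, or the end consumer absorbs it.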
Podcast Script: Why Business Leaders Should Be Coaches, Not Captains

Intro Music Fades In

Host: Welcome to BusinessIsGood, the podcast where we explore the ideas and practices that help entrepreneurs grow their businesses and create lasting success. I'm your host, Chris Cooper. Today, we're tackling a big question: should you lead your business as a "captain" or as a "coach"?

To illustrate this, I want to start with a story from hockey. Bobby Hull, nicknamed "The Golden Jet," was one of the greatest players ever to lace up skates. Known for his blazing speed and powerful slap shot, he dominated as a player in both the NHL and the WHA.

But Hull also took on a rare challenge: he tried to be both a player and a coach at the same time while leading the Winnipeg Jets in the WHA during the early 1970s. He had incredible success as a player and later achieved even greater success as a coach, but his tenure as both didn't work out the way he, or the Jets, had hoped.

Segment 1: The Player-Coach Dilemma

Bobby Hull's time as a player-coach highlights an important leadership lesson: you can't do both jobs effectively at the same time. As a player, your focus is on performance: executing plays, scoring goals, and being in the action. But as a coach, your role is to oversee the big picture, strategize, and make tough decisions to guide the team to success.

Even some of the most celebrated names in hockey, like Larry Robinson, achieved greatness as both players and coaches, but never at the same time. Why? Because these are two fundamentally different roles that require completely different mindsets and skill sets.

Segment 2: The Captain vs. Coach Paradigm in Business

This same distinction applies in business. Many entrepreneurs try to lead as captains when they really need to be coaches. Let's break this down:

Limited Perspective on the Ice:
When you're in the trenches with your team, you can only see what's directly in front of you.
You don't have the big-picture context that a coach has from the bench. In business, this means getting too caught up in day-to-day operations and losing sight of long-term strategy.

Emotional Proximity:
As a captain, you're shoulder-to-shoulder with your team. This camaraderie can make it hard to make tough decisions, like moving someone to a different role or cutting an underperformer. A coach, however, has the necessary distance to prioritize what's best for the organization as a whole.

Distraction by Small Tasks:
Captains are busy tying their skates, taping their sticks, and focusing on their personal performance. Coaches are busy drawing up game plans, scouting opponents, and thinking about how to improve the team. In business, staying stuck in "captain mode" means you spend too much time on the wrong things, handling tasks that someone else could do instead of focusing on growth and vision.

Segment 3: Why Being a Coach Wins in Business

Here's the truth: real leadership isn't about scoring the most goals. It's about enabling your team to win. As a coach, your job is to:
- Make hard decisions that benefit the whole organization.
- Delegate tasks and trust others to execute them.
- Hold your team accountable and provide constructive feedback.
- Focus on strategy, vision, and the next big opportunity.

Many entrepreneurs default to being captains because it's what they know; it's comfortable. They're great at doing the work, but they shy away from the harder, more abstract job of coaching. But this mindset limits growth. Your business can't scale if you're always on the ice.

Think about it: players are replaceable. You can hire someone else to score goals. What you...
Wha's Something Young People Can't Do That All Of Us Over 30 Easily Know by Maine's Coast 93.1
Jones and Keefe got into the Red Sox offseason and the comments from President Sam Kennedy on his expectations for the upcoming season. The guys went on to recap Wild Card Weekend in the NFL during Wha' Happened?. Finally, Jones and Keefe discussed the chances of Deion Sanders becoming the next head coach of the Dallas Cowboys.
Welcome to 2025 everyone! Today, travel medicine specialists Drs. Paul Pottinger ("Germ") & Chris Sanford ("Worm") answer your travel health questions:

- What are the differences between vaccines that are routine, recommended, and required?
- What is Pertussis, and how can I avoid it?
- What can the US embassy do for me when I'm abroad?
- African Sleeping Sickness… what's the deal?
- Trekking Poles: Help or Hindrance?
- Isn't it just safer to stay home?
- What's the best layover duration from an anxiety perspective?
- What's mycoplasma pneumonia?

We hope you enjoy this podcast! If so, please follow us on the socials @germ.and.worm, subscribe to our RSS feed, and share with your friends! We would so appreciate your rating and review to help us grow our audience. And, please send us your questions and travel health anecdotes: germandworm@gmail.com.

Our Disclaimer: The Germ and Worm Podcast is designed to inform, inspire, and entertain. However, this podcast does NOT establish a doctor-patient relationship, and it should NOT replace your conversation with a qualified healthcare professional. Please see one before your next adventure. The opinions in this podcast are Dr. Sanford's & Dr. Pottinger's alone, and do not necessarily represent the opinions of the University of Washington or UW Medicine.
S3E97 Iconic Edinburgh poet Robert Fergusson is the subject of today's podcast, as Ash looks at his breakthrough poem, 'The Daft Days': Now mirk December's dowie face Glowrs owr the rigs wi sour grimace, While, thro' his minimum of space, The bleer-ey'd sun, Wi blinkin light and stealing pace, His race doth run. From naked groves nae birdie sings, To shepherd's pipe nae hillock rings, The breeze nae od'rous flavour brings From Borean cave, And dwyning nature droops her wings, Wi visage grave. Mankind but scanty pleasure glean Frae snawy hill or barren plain, Whan winter, ‘midst his nipping train, Wi frozen spear, Sends drift owr a' his bleak domain, And guides the weir. Auld Reikie! thou'rt the canty hole, A bield for many caldrife soul, Wha snugly at thine ingle loll, Baith warm and couth, While round they gar the bicker roll To weet their mouth. When merry Yule-day comes, I trou, You'll scantlins find a hungry mou; Sma are our cares, our stamacks fou O' gusty gear, And kickshaws, strangers to our view, Sin fairn-year. Ye browster wives, now busk ye braw, And fling your sorrows far awa; Then come and gie's the tither blaw Of reaming ale, Mair precious than the well of Spa, Our hearts to heal. Then, tho' at odds wi a' the warl', Amang oursels we'll never quarrel; Tho' Discord gie a canker'd snarl To spoil our glee, As lang's there's pith into the barrel We'll drink and ‘gree. Fidlers, your pins in temper fix, And roset weel your fiddle-sticks; But banish vile Italian tricks Frae out your quorum, Not fortes wi pianos mix – Gie's Tulloch Gorum. For nought can cheer the heart sae weel As can a canty Highland reel; It even vivifies the heel To skip and dance: Lifeless is he wha canna feel Its influence. Let mirth abound, let social cheer Invest the dawning of the year; Let blithesome innocence appear To crown our joy; Nor envy wi sarcastic sneer Our bliss destroy. And thou, great god of Aqua Vitae! 
Wha sways the empire of this city, When fou we're sometimes capernoity, Be thou prepar'd To hedge us frae that black banditti, The City Guard. Title Music: 'Not Drunk' by The Joy Drops. All other music by Epidemic Sound. @earreadthis earreadthis@gmail.com facebook.com/earreadthis
Jones and Keefe continued to talk about the Patriots' new NFL Draft position. What can the Patriots do at #4? Before discussing the performance of quarterback Joe Milton III in yesterday's Patriots win, the guys took a look at some of the big moments from Week 18 in Wha' Happened?.
So the media is blaming far-right white dudes for ISIS... Jeff says "Wha?"
We recompose ourselves after a loss to Latvia and then a less-than-inspiring win over Germany at the World Junior Championships. What does the classic New Year's Eve game vs the USA have in store? We're concerned. We review some interesting stats from the calendar year 2024 and then give our thoughts on some novel ideas to improve the game. As always we have Guess the 5th and Connections!

Listen Here: Apple Podcasts | Direct MP3 | iHeart Radio

Title Player - Ryan Whitney - 481 GP

News
- Dallas practice on Boxing Day - big no-no - $100,000 fine
- Latvia beats Canada!
- Spengler Cup - seems like fun
- Derek Lalonde out in Detroit, MacLellan in - 4th coaching change
- Crosby passes Lemieux for assists with Penguins at 1034
- Avalanche sign Blackwood for 5 years, $26.25MM
- Kaprizov hurt
- Winter Classic - Chicago vs St. Louis - St. Louis has been in 2 of the last 4 - why? 16 Winter Classics, only 2 Canadian teams. Teams that have never played: Edmonton Oilers, Calgary Flames, Vancouver Canucks, Winnipeg Jets, Ottawa Senators, Florida Panthers, Tampa Bay Lightning, New Jersey Devils, Carolina Hurricanes, Columbus Blue Jackets, New York Islanders, Colorado Avalanche, Los Angeles Kings, Anaheim Ducks, San Jose Sharks

Guess the 5th

Going Streakin'
- CHI 4L; NYR and SJS 3L - Rangers 8th in the Metro, 2-8-0 in their last 10, 7 pts out of the playoffs with 5 teams ahead
- VGK 6W, COL 4W, BUF 3W

NHL Stats for 2024 - see guide
- 27 comeback wins from 3+ goals down
- EDM 16-game winning streak
- AZ move to Utah
- Ovechkin - 25 goals to Gretzky; RS+PO: Gretzky 1016, Ovi 942 (-74); 82-game average goals - both at 49
- Combined playoffs + regular season (including WHA): Howe 1071 G + 1518 A = 2589 PTS; Gretzky 1072 G + 2297 A = 3369 PTS
- GP: Gretzky 1695 + 93 = 1788; Howe 1924 + 497 = 2421 (633 more)
- Crosby - 600 G, 1000 A, 1600 PTS, 1300 GP; 19th PPG regular season, matching Gretzky's record
- McDavid - 900 / 1000 PT
- Matthews - 69 G - highest since Lemieux in 95-96
- McDavid and Kucherov - 4th and 5th players to get 100 A in a season
- Most KM skated - McDavid - 605 km (378 mi)

Connections
- John Gee, Nora and Paul Stewart, David Huyben all gave answers.

Ideas to Improve Hockey / NHL
- Vary rink sizes - length from 210' to 190' - Boston Arena before 1924 was 220 x 90, but in 1924 was 200 x 80 feet when the Bruins moved in. Width from 80' to 95' - Boston Garden was 191 x 84, Chicago Stadium was 185 x 85, Buffalo 196 x 85
- Player benches across from each other, centred, so no "long change"
- 1934-40 - penalty shot circle - 20' diameter, 38' from the goal line
- 1943 - red line added - to speed up the game and reduce offsides
- Penalty shot for defensive-zone stick infractions - hooking, tripping, slashing, cross-checking inside the house. Shoot from the circle, puck live if a rebound. Players line up behind the circles.
- No cross-checks - none
- Eliminate standing with the puck behind the net to reset - the defensive team must move the puck - side to side or backward is fine, but they cannot stand still waiting for line changes
- Presentation - more conversations with players when prepping - similar to F1 - not scripted - just have a reporter walking around, talking to equipment managers, maybe getting a quick chat with a player if they happen to be standing around
- Stats done by regular season + playoffs
- Ask for listener ideas - and rationale

Crazy Stat

DOPeS
It's our year-end Holiday Roundtable Spectacular - featuring a look back at the year's newest additions to "what used-to-be" in professional sports (RIP MLB's "Oakland" Athletics & the NHL's Arizona Coyotes), and a predictive glimpse into what might be in store for 2025 - with two of our favorite fellow defunct sports enthusiasts: Steve Holroyd (Crossecheck, Philly Classics & Episodes 92, 109, 149, 188 & 248); and Paul Reeths (OurSportsCentral.com, StatsCrew.com & Episode 46).

Buckle up for our yearly mélange of amusement and bemusement at the fringes of the pro sports establishment, as we simultaneously marvel at and lament some of the most curious events of the past year, debate who and what might be next to stumble into oblivion, and conjecture about future scenarios for the next generation of defunct and otherwise forgotten pro sports teams and leagues - including:

- Spring football's unified UFL
- Arena Football League 2.0 RIP (and Arena Football One 2025)
- MLB's now-Sacramento-and-someday-Las Vegas (maybe) Athletics
- The NHL's Utah Hockey Club (fka Arizona [née Phoenix] Coyotes, via the WHA's original Winnipeg Jets)
- Major League Cricket
- Baseball's genre-bending Savannah Bananas - and their soon-to-launch Banana Ball Championship League
- Indoor soccer's new Baller League
- The Premier Lacrosse League's pivot to city teams and a new women's division
- The new League One Volleyball (LOVB) takes on the 2nd-year Pro Volleyball Federation (PVF)
- NWSL soccer
- PWHL hockey

PLUS: Can Diamond Baseball Holdings (41 MiLB teams and counting!) be stopped?

AND: Will Michael Jordan, et al. break up the NASCAR stock car monopoly?
+ + + SUPPORT THE SHOW:
Buy Us a Coffee: https://ko-fi.com/goodseatsstillavailable
"Good Seats" Merch: https://www.teepublic.com/?ref_id=35106

SPONSOR THANKS (AND SUPPORT THE SHOW!):
Royal Retros (10% off promo code: SEATS): https://www.503-sports.com?aff=2
Old School Shirts.com (10% off promo code: GOODSEATS): https://oldschoolshirts.com/goodseats
Yinzylvania (20% off promo code: GOODSEATSSTILLAVAILABLE): https://yinzylvania.com/GOODSEATSSTILLAVAILABLE

FIND AND FOLLOW:
Website: https://goodseatsstillavailable.com/
Blue Sky: https://bsky.app/profile/goodseatsstillavailable.com
Threads: https://www.threads.net/@goodseatsstillavailable
X/Twitter: https://twitter.com/GoodSeatsStill
Instagram: https://www.instagram.com/goodseatsstillavailable/
Facebook: https://www.facebook.com/GoodSeatsStillAvailable/
YouTube: https://www.youtube.com/@goodseatsstillavailable
How can businesses navigate the challenges of IT unpredictability and ensure operational continuity in an ever-evolving tech landscape? In today's episode of Tech Talks Daily, I'm joined by Geoff Hixon, VP of Solutions Architecture at Lakeside Software, to explore how data-driven strategies are reshaping IT resilience and recovery. Geoff shares his experiences in supporting Lakeside customers during the CrowdStrike global IT outage, including insights into the rapid recovery process for a global airline and a multinational oil and gas company. Geoff also provides an exclusive preview of Lakeside Software's highly anticipated IT Resilience report, offering valuable insights into how organizations can transition from reactive to proactive and eventually autonomous IT management. By focusing on real-time data collection and visibility, he highlights the importance of identifying issues before they escalate and shares how enhanced data insights can prevent costly errors—like a bank's multimillion-pound oversight caused by missing a simple cable requirement. Additionally, we discuss the role of AI in the journey toward autonomous IT, where routine support tasks are automated to free up IT teams for more strategic initiatives. Geoff illustrates how Lakeside's approach helps organizations build trust in automation through step-by-step implementation and testing, paving the way for self-healing IT systems. Tune in to discover how forward-thinking organizations can harness the power of data, automation, and proactive strategies to build IT systems that are not only resilient but also prepared for the challenges of tomorrow.
Happy holidays! We'll be sharing snippets from Latent Space LIVE! through the break, bringing you the best of 2024! We want to express our deepest appreciation to event sponsors AWS, Daylight Computer, Thoth.ai, StrongCompute, Notable Capital, and most of all, all our LS supporters who helped fund the gorgeous venue and A/V production!

For NeurIPS last year we did our standard conference podcast coverage interviewing selected papers (as we have now also done for ICLR and ICML), however we felt that we could be doing more to help AI Engineers 1) get more industry-relevant content, and 2) recap the 2024 year in review from experts. As a result, we organized the first Latent Space LIVE!, our first in-person miniconference, at NeurIPS 2024 in Vancouver.

The single most requested domain was computer vision, and we could think of no one better to help us recap 2024 than our friends at Roboflow, who were one of our earliest guests in 2023 and again had one of this year's top episodes in 2024. Roboflow has since raised a $40m Series B!

Links

Their slides are here:

All the trends and papers they picked:

Isaac Robinson:
* Sora (see our Video Diffusion pod) - extending diffusion from images to video
* SAM 2: Segment Anything in Images and Videos (see our SAM2 pod) - extending prompted masks to full video object segmentation
* DETR Dominance: DETRs show Pareto improvement over YOLOs
* RT-DETR: DETRs Beat YOLOs on Real-time Object Detection
* LW-DETR: A Transformer Replacement to YOLO for Real-Time Detection
* D-FINE: Redefine Regression Task in DETRs as Fine-grained Distribution Refinement

Peter Robicheaux:
* MMVP (Eyes Wide Shut?
Exploring the Visual Shortcomings of Multimodal LLMs)
* Florence-2 (Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks)
* PaliGemma / PaliGemma 2
* PaliGemma: A versatile 3B VLM for transfer
* PaliGemma 2: A Family of Versatile VLMs for Transfer
* AIMv2 (Multimodal Autoregressive Pre-training of Large Vision Encoders)

Vik Korrapati - Moondream

Full Talk on YouTube

Want more content like this? Like and subscribe to stay updated on our latest talks, interviews, and podcasts.

Transcript/Timestamps

[00:00:00] Intro

[00:00:05] AI Charlie: Welcome to Latent Space Live, our first mini conference held at NeurIPS 2024 in Vancouver. This is Charlie, your AI co-host. When we were thinking of ways to add value to our academic conference coverage, we realized that there was a lack of good talks just recapping the best of 2024, going domain by domain.

[00:00:36] AI Charlie: We sent out a survey to the over 900 of you who told us what you wanted, and then invited the best speakers in the Latent Space Network to cover each field. 200 of you joined us in person throughout the day, with over 2,200 watching live online. Our second featured keynote is The Best of Vision 2024, with Peter Robicheaux and Isaac [00:01:00] Robinson of Roboflow, with a special appearance from Vik Korrapati of Moondream.

[00:01:05] AI Charlie: When we did a poll of our attendees, the highest-interest domain of the year was vision, and so our first port of call was our friends at Roboflow. Joseph Nelson helped us kickstart our vision coverage in episode 7 last year, and this year came back as a guest host with Nikhila Ravi of Meta to cover Segment Anything 2.

[00:01:25] AI Charlie: Roboflow have consistently been the leaders in open source vision models and tooling, with their Supervision library recently eclipsing PyTorch's Vision library, and Roboflow Universe hosting hundreds of thousands of open source vision datasets and models.
They have since announced a $40 million Series B led by Google Ventures.

[00:01:46] AI Charlie: Woohoo.

[00:01:48] Isaac's picks

[00:01:48] Isaac Robinson: Hi, we're Isaac and Peter from Roboflow, and we're going to talk about the best papers of 2024 in computer vision. So, for us, we defined "best" as what made [00:02:00] the biggest shifts in the space. And to determine that, we looked at what are some major trends that happened and what papers most contributed to those trends.

[00:02:09] Isaac Robinson: So I'm going to talk about a couple trends, Peter's going to talk about a trend, and then we're going to hand it off to Moondream. So, the trends that I'm interested in talking about are a major transition from models that run on a per-image basis to models that run using the same basic ideas on video, and also how DETRs are starting to take over the real-time object detection scene from the YOLOs, which have been dominant for years.

[00:02:37] Sora, OpenSora and Video Vision vs Generation

[00:02:37] Isaac Robinson: So as a highlight, we're going to talk about Sora, which from my perspective is the biggest paper of 2024, even though it came out in February. Is the what?

[00:02:48] Isaac Robinson: Yeah. Yeah. So, Sora is just a blog post. So I'm going to fill it in with details from replication efforts, including OpenSora and related work, such as Stable Video Diffusion. And then we're also going to talk about SAM 2, which applies the SAM strategy to video, and then the improvements in 2024 to DETRs that are making them a Pareto improvement over YOLO-based models.

[00:03:15] Isaac Robinson: So to start this off, we're going to talk about the state of the art of video generation at the end of 2023: MagViT. MagViT is a discrete-token video tokenizer akin to VQ-GAN, but applied to video sequences.
And it actually outperforms state-of-the-art handcrafted video compression frameworks

[00:03:38] Isaac Robinson: in terms of the bit rate versus human preference for quality. And videos generated by autoregressing on these discrete tokens generate some pretty nice stuff, but up to about five seconds in length and, you know, not super detailed. And then suddenly, a few months later, we have this, which when I saw it was totally mind-blowing to me.

[00:03:59] Isaac Robinson: 1080p, [00:04:00] a whole minute long. We've got light reflecting in puddles. That's reflective. Reminds me of those RTX demonstrations for next-generation video games, such as Cyberpunk, but with better graphics. You can see some issues in the background if you look closely, but as with a lot of these models, the issues tend to be things that people aren't going to pay attention to unless they're looking for them.

[00:04:24] Isaac Robinson: In the same way that six fingers on a hand is a giveaway you're not going to notice unless you're looking for it. So yeah, as we said, Sora does not have a paper, so we're going to be filling it in with context from the rest of the computer vision scene attempting to replicate these efforts. So the first step: you have an LLM caption a huge amount of videos.

[00:04:48] Isaac Robinson: This is a trick that they introduced in DALL-E 3, where they train an image captioning model to just generate very high quality captions for a huge corpus and then train a diffusion model [00:05:00] on that. Sora and the replication efforts also show a bunch of other steps that are necessary for good video generation,

[00:05:09] Isaac Robinson: including filtering by aesthetic score and filtering by making sure the videos have enough motion, so the generator's not learning to just generate static frames. So then we encode our video into a series of space-time latents.
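To make the "discrete token" idea behind MagViT-style tokenizers concrete, here is a toy nearest-neighbor vector-quantization step in plain Python. This is an illustrative sketch only: real tokenizers learn the codebook end to end and quantize space-time patches, and the codebook and latent values below are made up.

```python
# Toy vector quantization: map each continuous latent vector to the index
# of its nearest codebook entry; that index is the discrete "token".
# The decoder then only ever sees codebook entries, not raw latents.

def quantize(latents, codebook):
    """Return (token_ids, reconstructed_latents) for a list of vectors."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    tokens, recon = [], []
    for vec in latents:
        idx = min(range(len(codebook)), key=lambda i: sq_dist(vec, codebook[i]))
        tokens.append(idx)
        recon.append(codebook[idx])
    return tokens, recon

codebook = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # hypothetical
latents = [[0.1, -0.2], [0.9, 0.1], [0.4, 0.8]]
tokens, recon = quantize(latents, codebook)
print(tokens)  # [0, 1, 2]
```

Autoregressing over those integer token sequences is what lets a language-model-style transformer generate video, at the cost of whatever detail the quantization threw away.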
Once again, Sora is very sparse on details.

[00:05:29] Isaac Robinson: So among the replication-related works, OpenSora actually uses MagViT v2 itself to do this, but swaps out the discretization step with a classic VAE autoencoder framework. They show that there's a lot of benefit from getting the temporal compression, which makes a lot of sense, as sequential frames in videos have mostly redundant information.

[00:05:53] Isaac Robinson: So by compressing in the temporal space, you allow the latent to hold [00:06:00] a lot more semantic information while avoiding that duplication. So, we've got our space-time latents, possibly via some 3D VAE, presumably a MagViT v2, and then you throw it into a diffusion transformer.

[00:06:19] Isaac Robinson: So I think it's personally interesting to note that OpenSora is using a MagViT v2, which originally used an autoregressive transformer decoder to model the latent space, but is now using a diffusion transformer. So it's still a transformer happening. Just the question is: is it

[00:06:37] Isaac Robinson: parameterizing the stochastic differential equation, or parameterizing a conditional distribution via autoregression? It's also worth noting that most diffusion models today, the very high performance ones, are switching away from the classic DDPM (denoising diffusion probabilistic modeling) framework to rectified flows.

[00:06:57] Isaac Robinson: Rectified flows have a very interesting property: as [00:07:00] they converge, they actually get closer to being able to be sampled with a single step, which means that in practice you can actually generate high quality samples much faster. A major problem of DDPM and related models for the past four years is just that they require many, many steps to generate high quality samples.

[00:07:22] Isaac Robinson: So, naturally, the third step is throwing lots of compute at the problem.
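The single-step sampling property mentioned above comes from rectified flows straightening the probability path between noise and data. A toy sketch (illustrative only; the "model" here is a hand-coded optimal velocity for one straight coupling, not a learned network) shows why a straight path makes one Euler step land exactly on the target:

```python
# Rectified flow toy: transport a noise sample z (t=0) to a data sample x1
# (t=1) along the straight path x_t = (1 - t) * z + t * x1. The ideal
# velocity along this path is constant, v = x1 - z, so Euler integration
# is exact in ONE step. A trained model only approximates straightness,
# hence "closer to" single-step sampling as training converges.

def velocity(z, x1):
    # Hand-coded optimal (constant) velocity for the straight coupling;
    # a rectified-flow network would predict this from (x_t, t).
    return [b - a for a, b in zip(z, x1)]

def euler_sample(z, x1, steps):
    x = list(z)
    dt = 1.0 / steps
    for _ in range(steps):
        v = velocity(z, x1)  # constant field along a straight path
        x = [xi + dt * vi for xi, vi in zip(x, v)]
    return x

z, x1 = [0.5, -1.0], [2.0, 3.0]
one_step = euler_sample(z, x1, steps=1)    # lands exactly on x1
many_steps = euler_sample(z, x1, steps=10) # same endpoint, 10x the work
```

With a curved path (as in DDPM-style models), the velocity changes along the trajectory, so coarse Euler steps accumulate error; that is exactly the many-steps cost the speaker is describing.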
So I never managed to get this video to loop, but we see very little compute, medium compute, lots of compute. This is so interesting because the original diffusion transformer paper from Facebook actually showed that the specific hyperparameters of the transformer didn't really matter that much.[00:07:48] Isaac Robinson: What mattered was that you were just increasing the amount of compute that the model had. So I love how in the, once again, little blog post, they don't even talk about [00:08:00] the specific hyperparameters. They say, we're using a diffusion transformer, and we're just throwing more compute at it, and this is what happens.[00:08:08] Isaac Robinson: OpenSora shows similar results. The primary issue here, I think, is that no one else has the 32x compute budget. So we end up in the middle of the compute domain in most of the related work, which is still super, super cool. It's just a little disappointing considering the context. So I think this is a beautiful extension of the framework that was introduced in '22 and '23 for very high quality per-image generation, extended to videos.[00:08:39] Isaac Robinson: It's awesome. And it's GA as of Monday, except no one can seem to get access to it because they keep shutting down the login.[00:08:46] SAM and SAM2[00:08:46] Isaac Robinson: So the next paper I wanted to talk about is SAM. We at Roboflow allow users to label data and train models on that data. SAM, for us, has saved our users 75 years of [00:09:00] labeling time.[00:09:00] Isaac Robinson: We are, to the best of my knowledge, the largest SAM API that exists.
SAM also allows our users to train pure bounding-box regression models and use those to generate high quality masks, which has the great side effect of requiring less training data to reach meaningful convergence.[00:09:20] Isaac Robinson: Most people are data-limited in the real world, so anything that requires less data to get to a useful result is super useful. Many of our users actually run their object detectors on every frame in a video. And so SAM 2 falls into this category of taking something that really, really works and applying it to video, which has the wonderful benefit of being plug-and-play with many of our users' use cases.[00:09:53] Isaac Robinson: We're still building out a sufficiently mature pipeline to take advantage of that, but it's in the works. [00:10:00] So here we've got a great example: we can click on cells and then follow them. You'll even notice the cell goes away and comes back, and we can still keep track of it, which is very challenging for existing object trackers.[00:10:14] Isaac Robinson: High-level overview of how SAM 2 works: there's a simple pipeline here where we provide some type of prompt, and it fills out the likely masks for that object throughout the rest of the video. So here we're giving a bounding box in the first frame, a set of positive/negative points, or even just a simple mask.[00:10:36] Isaac Robinson: I'm going to assume people are somewhat familiar with SAM, so I'm just going to give a high-level overview of how SAM works. You have an image encoder that runs on every frame.
SAM 2 can be used on a single image, in which case the only difference between SAM 2 and SAM is the image encoder: SAM used a standard ViT; [00:11:00] SAM 2 replaced that with Hiera, a hierarchical encoder, which gets approximately the same results but leads to six times faster inference, which is[00:11:11] Isaac Robinson: excellent, especially considering how a trend of '23 was replacing the ViT with more efficient backbones. In the case where you're doing video segmentation, the difference is that you actually create a memory bank, and you cross-attend the features from the image encoder against the memory bank.[00:11:31] Isaac Robinson: The feature set that is created is essentially, well, I'll go more into it in a couple of slides, but we take the features from the past couple frames, plus a set of object pointers and the set of prompts, and use that to generate our new masks. We then fuse the new masks for this frame with the[00:11:57] Isaac Robinson: image features and add that to the memory bank. [00:12:00] Well, I'll say more in a minute. Just like SAM, SAM 2 actually uses a data engine to create its dataset, in that they assembled a huge amount of reference data, used people to label some of it, trained the model, used the model to label more of it, and asked people to refine the predictions of the model.[00:12:20] Isaac Robinson: And then ultimately the dataset is just created from the engine's final output of the model on the reference data. It's very interesting. This paradigm is so interesting to me because it unifies a model and a dataset in a way that is very unique. It seems unlikely that another model could come in and have such a tight fit.[00:12:37] Isaac Robinson: So, brief overview of how the memory bank works. The paper did not have a great visual, so I'm going to fill in a bit more. So we take the last couple of frames from our video.
And we take the last couple of frames from our video and attend to them, along with the set of prompts that we provided, which could come from the future, [00:13:00] they could come from anywhere in the video, as well as reference object pointers saying, by the way, here's what we've found so far. Attending to the last few frames has the interesting benefit of allowing it to model complex object motion without actually attending to the whole video.[00:13:18] Isaac Robinson: By limiting the amount of frames that you attend to, you manage to keep the model running in real time. This is such an interesting topic for me, because one would assume that attending to all of the frames, or having some type of summarization of all the frames, is super essential for high performance.[00:13:35] Isaac Robinson: But we see in their later ablation that that actually is not the case. So here, just to make sure that there is some benchmarking happening, we compare to some of the stuff that came out prior, and indeed the SAM 2 strategy does improve on the state of the art. This ablation, deep in their appendix, was super interesting to me.[00:13:59] Isaac Robinson: [00:14:00] We see in section C the number of memories. One would assume that increasing the count of memories would meaningfully increase performance. And we see that it has some impact, but not the type that you'd expect, and that it meaningfully decreases speed, which justifies, in my mind, just having this FIFO queue of memories.[00:14:20] Isaac Robinson: Although in the future, I'm super interested to see a more dedicated summarization of the whole preceding video, not just a stacking of the last frames.
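The FIFO queue of memories described above has a very compact shape. This is a hypothetical sketch of just the bookkeeping, not SAM 2's actual implementation: keep a fixed-capacity queue of per-frame memories, evict the oldest when full, and let each new frame cross-attend to whatever is in the queue (the attention itself is omitted here). The class and method names are made up for illustration.

```python
# Hypothetical sketch of a FIFO memory bank: only the last N frame
# memories are kept, bounding per-frame cost so tracking stays real-time.
from collections import deque

class MemoryBank:
    def __init__(self, max_memories=3):
        # deque with maxlen evicts the oldest entry automatically
        self.memories = deque(maxlen=max_memories)

    def add(self, frame_features):
        self.memories.append(frame_features)

    def context(self):
        # In the real model, the current frame's features cross-attend
        # to these memories (plus prompts and object pointers).
        return list(self.memories)

bank = MemoryBank(max_memories=3)
for frame_id in range(5):
    bank.add(f"features_frame_{frame_id}")
print(bank.context())  # only the last 3 frames survive
```

The ablation's finding, that more memories barely help but do cost speed, is what makes this bounded queue a reasonable default over attending to the whole video.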
So that's another extension of beautiful per-frame work into the video domain.[00:14:42] Realtime detection: DETRs > YOLO[00:14:42] Isaac Robinson: The next trend I'm interested in talking about: at Roboflow, we're super interested in training real-time object detectors.[00:14:50] Isaac Robinson: Those are our bread and butter. And so we're doing a lot to keep track of what is actually happening in that space. We are finally starting to see something change. So, [00:15:00] for years, YOLOs have been the dominant way of doing real-time object detection, and we can see here that they've essentially stagnated.[00:15:08] Isaac Robinson: The performance between v10 and v11 is not meaningfully different, at least, you know, in this type of high-level chart. And even over the last couple series, there's not a major change. So YOLOs have hit a plateau; DETRs have not. So we can look here and see the YOLO series has this plateau, and then RT-DETR, LW-DETR, and D-FINE have meaningfully changed that plateau, so that in fact the best D-FINE models are plus 4.6 AP on COCO at the same latency.[00:15:43] Isaac Robinson: So, three major steps to accomplish this. The first, RT-DETR, is technically a 2023 preprint, but it was published officially in '24, so I'm going to include it. I hope that's okay. [00:16:00] RT-DETR showed that we could actually match or out-speed YOLOs.[00:16:04] Isaac Robinson: Then LW-DETR showed that pre-training is hugely effective on DETRs and much less so on YOLOs. And then D-FINE added the types of bells and whistles that we expect in this arena. So the major improvement that RT-DETR showed was taking the multi-scale features that DETRs typically pass into their encoder and decoupling them into a much more efficient transformer encoder.[00:16:30] Isaac Robinson: The transformer is, of course, quadratic complexity.
So decreasing the amount of stuff that you pass in at once is super helpful for increasing your runtime, or increasing your throughput. That change basically brought us up to YOLO speed, and then they do a hardcore analysis on benchmarking YOLOs, including the NMS step.[00:16:54] Isaac Robinson: Once you include NMS in the latency calculation, you see that in fact these DETRs [00:17:00] are outperforming, at least at that time, the YOLOs that existed. Then LW-DETR goes in and suggests that in fact the huge boost here is from pre-training. So, this is the D-FINE line, and this is the D-FINE line without pre-training.[00:17:19] Isaac Robinson: It's within range, it's still an improvement over the YOLOs, but the really huge boost comes from the benefit of pre-training. When YOLOX came out in 2021, they showed that they got much better results by having a much, much longer training time, but they found that when they did that, they actually did not benefit from pre-training.[00:17:40] Isaac Robinson: So, you see in this graph from LW-DETR that, in fact, YOLOs do have a real benefit from pre-training, but it goes away as we increase the training time. The DETRs, meanwhile, converge much faster: LW-DETR trains for only 50 epochs, RT-DETR for 60 epochs. So, one could assume that, in fact, [00:18:00] the entire extra gain from pre-training is that you're not destroying your original weights[00:18:06] Isaac Robinson: by relying on this long training cycle. And then LW-DETR also shows superior performance on our favorite dataset, Roboflow 100, which means that they do better in the real world, not just on COCO. Then D-FINE throws all the bells and whistles at it. YOLO models tend to have a lot of very specific, complicated loss functions.[00:18:26] Isaac Robinson: D-FINE brings that into the DETR world and shows consistent improvement on a variety of DETR-based frameworks.
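The NMS step whose latency matters so much here is standard non-maximum suppression. A minimal sketch, with boxes as (x1, y1, x2, y2, score) tuples and a made-up IoU threshold, shows the post-processing that YOLO-style detectors need and that end-to-end DETRs avoid by design:

```python
# Minimal sketch of non-maximum suppression (NMS): greedily keep the
# highest-scoring box and drop any remaining box that overlaps it too much.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2, score) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, iou_threshold=0.5):
    boxes = sorted(boxes, key=lambda b: b[4], reverse=True)
    kept = []
    for box in boxes:
        if all(iou(box, k) < iou_threshold for k in kept):
            kept.append(box)
    return kept

preds = [(0, 0, 10, 10, 0.9),    # strong detection
         (1, 1, 10, 10, 0.8),    # near-duplicate of the first
         (20, 20, 30, 30, 0.7)]  # separate object
print(nms(preds))  # the overlapping lower-score box is suppressed
```

Because this loop runs after the network and its cost scales with the number of candidate boxes, leaving it out of a YOLO latency benchmark understates the real end-to-end time, which is exactly the point of the RT-DETR analysis.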
So bring these all together, and we see that suddenly we have almost 60 AP on COCO while running in like 10 milliseconds. Huge, huge stuff. So we're spending a lot of time trying to build models that work better with less data, and DETRs are clearly becoming a promising step in that direction.[00:18:56] Isaac Robinson: What we're interested in seeing [00:19:00] from the DETRs in this trend next: Co-DETR and the models that are currently sitting on top of the leaderboard for large-scale inference scale really well as you switch out the backbone. We're very interested in seeing someone publish a paper, potentially us, on what happens if you take these real-time ones and then throw a Swin at them.[00:19:23] Isaac Robinson: Like, do we have a Pareto curve that extends from the real-time domain all the way up to the super, super slow but high performance domain? We also want to see people benchmarking on RF100 more, because that type of data is what's relevant for most users. And we want to see more pre-training, because pre-training works now.[00:19:43] Isaac Robinson: It's super cool.[00:19:48] Peter's Picks[00:19:48] Peter Robicheaux: Alright, so, yeah, in that theme, one of the big things that we're focusing on is how we get more out of our pre-trained models. And one of the lenses to look at this through is sort of [00:20:00] this new requirement for fine-grained visual details in the representations that are extracted from your foundation model.[00:20:08] Peter Robicheaux: So as a hook for this, oh yeah, this is just a list of all the papers that I'm going to mention. I just want to make sure I cite the actual papers so you can find them later.[00:20:18] MMVP (Eyes Wide Shut?
Exploring the Visual Shortcomings of Multimodal LLMs)[00:20:18] Peter Robicheaux: Yeah, so the big hook here is that I make the claim that LLMs can't see. If you go to Claude or ChatGPT and you ask it to look at this watch and tell me what time it is, it fails, right?[00:20:34] Peter Robicheaux: And so you could say, like, this is a very classic test of an LLM, but okay, maybe this image is too zoomed out, and it'll do better if we increase the resolution, and it'll have an easier time finding these fine-grained features, like where the watch hands are pointing.[00:20:53] Peter Robicheaux: No dice. And you could say, okay, well, maybe the model just doesn't know how to tell time from knowing the position of the hands. But if you actually prompt [00:21:00] it textually, it's very easy for it to tell the time. So this to me is proof that these LLMs literally cannot see the position of the watch hands; they can't see those details.[00:21:08] Peter Robicheaux: So the question is sort of why? And for you Anthropic heads out there, Claude fails too. So my first pick for best paper of 2024 in vision is this MMVP paper, which tries to investigate why LLMs don't have the ability to see fine-grained details. And so, for instance, it comes up with a lot of images like this, where you ask a question that seems very visually apparent to us, like, which way is the school bus facing?[00:21:32] Peter Robicheaux: And it gets it wrong, and then, of course, it makes up details to support its wrong claim. And the process by which it finds these images is sort of contained in its hypothesis for why it can't see these details.
So it hypothesizes that models that have been initialized with CLIP as their vision encoder don't have fine-grained details in the features extracted using CLIP, because CLIP sort of doesn't need to find these fine-grained [00:22:00] details to do its job correctly, which is just to match captions and images, right?[00:22:04] Peter Robicheaux: And at a high level, even if ChatGPT's vision encoder wasn't initialized with CLIP and wasn't trained contrastively at all, still, in order to do its job of captioning the image, it could do a pretty good job without actually finding the exact position of all the objects and visual features in the image, right?[00:22:21] Peter Robicheaux: So this paper finds a set of difficult images for these types of models. And the way it does it is it looks for embeddings that are similar in CLIP space but far apart in DINOv2 space. DINOv2 is a foundation model that was trained self-supervised purely on image data. It uses a somewhat complex student-teacher framework, but essentially it masks out certain areas of the image, or takes crops of certain areas, and tries to make sure that those have consistent representations, which is a way for it to learn very fine-grained visual features.[00:22:54] Peter Robicheaux: And so if you take things that are very close in CLIP space and very far in DINOv2 space, you get [00:23:00] pairs of images that are hard for ChatGPT and other big vision language models to distinguish. So, if you then ask questions about these images, well, as you can see from this chart, it's going to answer the same way for both images, right?[00:23:14] Peter Robicheaux: Because from the perspective of the vision encoder, they're the same image. And so if you ask a question like, how many eyes does this animal have, it answers the same for both.
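The pair-mining idea above reduces to comparing cosine similarities in two embedding spaces. This is a toy sketch with made-up 2D vectors standing in for real CLIP and DINOv2 embeddings, and made-up thresholds: a pair qualifies when it is near-identical in CLIP space but far apart in DINOv2 space.

```python
# Toy sketch of MMVP-style "CLIP-blind pair" mining: find image pairs
# close in CLIP embedding space but distant in DINOv2 embedding space.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Made-up 2-D embeddings for three hypothetical images:
clip_emb = {"bus_left": [1.0, 0.0], "bus_right": [0.999, 0.04], "cat": [0.0, 1.0]}
dino_emb = {"bus_left": [1.0, 0.0], "bus_right": [0.0, 1.0],   "cat": [0.5, 0.5]}

def clip_blind_pairs(names, clip_hi=0.95, dino_lo=0.6):
    pairs = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if cosine(clip_emb[a], clip_emb[b]) > clip_hi and \
               cosine(dino_emb[a], dino_emb[b]) < dino_lo:
                pairs.append((a, b))
    return pairs

print(clip_blind_pairs(list(clip_emb)))  # [('bus_left', 'bus_right')]
```

The two bus images collapse to nearly the same point for CLIP, so a CLIP-initialized VLM answers "which way is the bus facing?" identically for both, while DINOv2 keeps them well separated.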
And all these other models, including LLaVA, do the same thing, right? And so this is the benchmark that they create: finding CLIP-blind pairs, which are pairs of images that are similar in CLIP space, and creating a dataset of multiple-choice questions based off of those.[00:23:39] Peter Robicheaux: And so how do these models do? Well, really bad. ChatGPT and Gemini do a little bit better than random guessing, but like half of the performance of humans, who find these problems very easy. LLaVA is, interestingly, extremely negatively correlated with this dataset. It does much, much, much, much worse [00:24:00] than random guessing, which means that this process has done a very good job of identifying hard images for LLaVA specifically.[00:24:07] Peter Robicheaux: And that's because LLaVA is basically not trained for very long and is initialized from CLIP, so you would expect it to do poorly on this dataset. So, one of the proposed solutions that this paper attempts is basically saying, okay, well if CLIP features aren't enough, what if we train the visual encoder of the language model also on DINO features?[00:24:27] Peter Robicheaux: And it proposes two different ways of doing this. One is additive, which is basically interpolating between the two features, and one is interleaving, which is training on the combination of both features.[00:24:45] Peter Robicheaux: So there's this really interesting trend when you do the additive mixture of features, where zero is all CLIP features and one is all DINOv2 features. I think it's helpful to look at the rightmost chart first: as you increase the number of DINOv2 features, your model does worse and worse and [00:25:00] worse on the actual language modeling task.
And that's because DINOv2 features were trained in a completely self-supervised manner and completely in image space.[00:25:08] Peter Robicheaux: It knows nothing about text. These features aren't really compatible with these text models, and so you can train an adapter all you want, but it seems that it's in such an alien language that it's a very hard optimization problem for these models to solve. And that kind of supports what's happening on the left, which is that, yeah, it gets better at answering these questions as you include more DINOv2 features, up to a point, but when you oversaturate, it completely loses its ability to[00:25:36] Peter Robicheaux: answer language and do language tasks. With the interleaving, they essentially double the number of tokens that are going into these models and just train on both, and it still doesn't really solve the MMVP task. It gets LLaVA 1.5 above random guessing by a little bit, but it's still not close to ChatGPT or, you know, any human performance, obviously.[00:25:59] Peter Robicheaux: [00:26:00] So clearly this proposed solution of just using DINOv2 features directly isn't going to work.
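The "additive" mixture described above is just a linear interpolation between the two feature vectors. A minimal sketch, with made-up feature values, where alpha = 0 is pure CLIP and alpha = 1 is pure DINOv2 (the reported sweet spot lying somewhere in between before language ability collapses):

```python
# Sketch of the additive CLIP/DINOv2 feature mixture: a convex
# combination (1 - alpha) * clip + alpha * dino, per feature dimension.

def mix_features(clip_feat, dino_feat, alpha):
    assert len(clip_feat) == len(dino_feat)
    assert 0.0 <= alpha <= 1.0
    return [(1 - alpha) * c + alpha * d for c, d in zip(clip_feat, dino_feat)]

# Made-up feature vectors for illustration:
clip_feat = [1.0, 0.0, 2.0]
dino_feat = [0.0, 4.0, 2.0]

print(mix_features(clip_feat, dino_feat, 0.0))  # pure CLIP features
print(mix_features(clip_feat, dino_feat, 0.5))  # [0.5, 2.0, 2.0]
print(mix_features(clip_feat, dino_feat, 1.0))  # pure DINOv2 features
```

Sweeping alpha reproduces the paper's two trends in miniature: more DINOv2 signal helps on visual questions up to a point, but the blended features drift away from what the language model's adapter was aligned to.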
And basically what that means is that, as a vision foundation model, DINOv2 is going to be insufficient for language tasks, right?[00:26:14] Florence 2 (Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks)[00:26:14] Peter Robicheaux: So my next pick for best paper of 2024 would be Florence-2, which tries to solve this problem by incorporating not only this dimension of spatial hierarchy, which is to say pixel-level understanding, but also making sure to include what they call semantic granularity. The goal is basically to have features that are sufficient for finding objects in the image, so they have enough pixel information, but that can also be talked about and reasoned about.[00:26:44] Peter Robicheaux: That's the semantic granularity axis. So here's an example of the three different paradigms of labeling that they do. They create a big dataset. One is text, which is just captioning, and you would expect a model that's trained [00:27:00] only on captioning to have performance similar to ChatGPT: not to have spatial hierarchy, not to have features that are meaningful at the pixel level.[00:27:08] Peter Robicheaux: So they add another type, which is region-text pairs, which is essentially either classifying a region, doing object detection or instance segmentation on that region, or captioning that region. And then they have text-phrase-region annotations, which is essentially a triple. Not only do you have a region that you've described, you also find its place in a descriptive paragraph about the image, which is basically trying to introduce even more semantic understanding of these regions.[00:27:39] Peter Robicheaux: And so, for instance, if you're saying a woman riding on the road, right, you have to know what a woman is and what the road is and that she's on top of it.
And that's basically composing a bunch of objects in this visual space, but also thinking about it semantically, right? And the way that they do this is they basically just dump features from a vision encoder [00:28:00] straight into an encoder-decoder transformer.[00:28:03] Peter Robicheaux: And then they train a bunch of different tasks, like object detection and so on, as language tasks. And I think that's one of the big things that we saw in 2024: these vision language models operating on pixel space linguistically. So they introduce a bunch of new tokens to point to locations.[00:28:22] Peter Robicheaux: So how does it actually do? We can see, if you look at the graph on the right, which is using the DINO framework, your pre-trained Florence-2 models transfer very, very well. They get 60% mAP on COCO, which is approaching state of the art, and they train[00:28:42] Vik Korrapati: with, and they[00:28:43] Peter Robicheaux: train much more efficiently.[00:28:47] Peter Robicheaux: So they converge a lot faster, and both of these things point to the fact that they're actually leveraging their pre-trained weights effectively. So where does it fall short? I forgot to mention, Florence-2 comes at 0.2 [00:29:00] billion and 0.7 billion parameter counts, so it's very, very small in terms of being a language model.[00:29:05] Peter Robicheaux: And I think that in this framework you can see saturation.
So, what this graph is showing is that if you train a Florence-2 model purely on the image-level and region-level annotations, and not including the pixel-level annotations like segmentation, it actually performs better as an object detector.[00:29:25] Peter Robicheaux: And what that means is that it's not able to actually learn all the visual tasks that it's trying to learn, because it doesn't have enough capacity.[00:29:32] PaliGemma / PaliGemma 2[00:29:32] Peter Robicheaux: So I'd like to see this paper explore larger model sizes, which brings us to our next big paper of 2024, or two papers. So PaliGemma came out earlier this year.[00:29:42] Peter Robicheaux: PaliGemma 2 was released, I think, like a week or two ago. Oh, I forgot to mention, you can actually label datasets on Roboflow and train a Florence-2 model, and you can actually train a PaliGemma 2 model on Roboflow, which we got into the platform within, like, 14 hours of release, which I was really excited about.[00:29:59] Peter Robicheaux: So, anyway, [00:30:00] PaliGemma is essentially doing the same thing, but instead of doing an encoder-decoder, it just dumps everything into a decoder-only transformer model. It also introduced the concept of location tokens to point to objects in pixel space. PaliGemma uses Gemma as the language model, specifically Gemma 2B.[00:30:17] Peter Robicheaux: PaliGemma 2 introduces using multiple different sizes of language models. The way that they get around having to do encoder-decoder is they use the concept of a prefix loss, which basically means that when it's generating tokens autoregressively, all the tokens in the prefix, which is the image it's looking at and a description of the task it's trying to do,[00:30:41] Peter Robicheaux: are attending to each other fully, full attention. Which means that, you know, it can sort of
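The prefix-LM attention pattern described above is easy to write down concretely. This sketch builds the boolean mask only (no model): prefix positions (image tokens plus the task description) attend to each other with full bidirectional attention, while suffix positions (the generated answer) attend causally. 1 means "query row may attend to key column."

```python
# Sketch of a prefix-LM attention mask: full attention inside the prefix,
# causal attention for the generated suffix.

def prefix_lm_mask(prefix_len, total_len):
    mask = []
    for q in range(total_len):
        row = []
        for k in range(total_len):
            if q < prefix_len:
                row.append(1 if k < prefix_len else 0)  # bidirectional prefix
            else:
                row.append(1 if k <= q else 0)          # causal suffix
        mask.append(row)
    return mask

# 3 prefix tokens (image + task), 2 suffix tokens (answer):
for row in prefix_lm_mask(prefix_len=3, total_len=5):
    print(row)
```

The three prefix rows are identical (each sees all of, and only, the prefix), while the suffix rows grow causally, which is what lets the prefix "color" the suffix without the suffix leaking into the prefix's representations.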
find high-level features easily; it's easier for the prefix to color the output of the suffix. So this is sort of [00:31:00] an example of one of the tasks it was trained on: you describe the task in English, you're asking it to segment these two classes of objects, and then it finds their locations using these location tokens, and it finds their masks using some encoding of the masks into tokens.[00:31:24] Peter Robicheaux: And, yeah, one of my critiques, I guess, of PaliGemma 1, at least, is that you find that performance saturates as a pre-trained model after only 300 million examples seen. So, what this graph is representing is each blue dot is a performance on some downstream task. And you can see that after seeing 300 million examples, it does about equally well on all of the downstream tasks that they tried it on, which was a lot, as it does at 1 billion examples, which to me also kind of suggests a lack of capacity for this model.[00:31:58] Peter Robicheaux: For PaliGemma 2, [00:32:00] you can see the results on object detection, so these were transferred to COCO. And you can see that this also points to an increase in capacity being helpful to the model: as both the resolution increases and the parameter count of the language model increases, performance increases.[00:32:16] Peter Robicheaux: Resolution makes sense, obviously; it helps to find small objects in the image. But it also makes sense for another reason, which is that it kind of gives the model a thinking register: it gives it more tokens to process when making its predictions. But yeah, you could say, oh, 43.6, that's not that great, like Florence-2 got 60. But this is not training a DINO or a DETR head on top of this image encoder. It's doing the raw language modeling task on COCO.
So it doesn't have any of the bells and whistles. It doesn't have any of the fancy losses. It doesn't even have bipartite graph matching or anything like that.[00:32:52] Peter Robicheaux: Okay, the big result, and one of the reasons that I was really excited about this paper, is that they blow everything else away [00:33:00] on MMVP. I mean, 47.3, sure, that's nowhere near human accuracy, which, again, is 94%, but for a 2 billion parameter language model to beat ChatGPT, that's quite the achievement.[00:33:12] Peter Robicheaux: And that sort of brings us to our final pick for paper of the year, which is AIMv2. So, AIMv2 sort of says, okay, maybe coming up with all these specific annotations to find features with high fidelity in pixel space isn't actually necessary, and we can come up with an even simpler, more beautiful idea for combining image tokens and text tokens in a way that's interfaceable for language tasks.[00:33:44] Peter Robicheaux: And this is nice because it can scale; you can come up with lots more data if you don't have to come up with all these annotations, right? So the way that it works is it does something very, very similar to PaliGemma, where you have a vision encoder that dumps image tokens into a decoder-only transformer.[00:33:59] Peter Robicheaux: But [00:34:00] the interesting thing is that it also autoregressively tries to reconstruct the image tokens, trained with a mean squared error loss. So instead of having to come up with fancy object detection or segmentation labels, you can just try to reconstruct the image and have it learn fine-grained features that way.[00:34:16] Peter Robicheaux: And it does this in, I think, a beautiful way that's compatible with the PaliGemma line of thinking, which is randomly sampling a prefix length and using only that number of image tokens as the prefix.
And so it's doing a similar thing with the causal mask. The causal-with-prefix setup is the attention mask on the right.[00:34:35] Peter Robicheaux: So it's doing full block attention with some randomly sampled number of image tokens, and then reconstructing the rest of the image and the downstream caption for that image. And this is the dataset that they train on. It's internet-scale data, very high quality data created by the Data Filtering Networks paper, essentially, which is maybe the best CLIP data that exists.[00:34:59] Peter Robicheaux: [00:35:00] And we can see that this is finally a model that doesn't saturate. Even at the highest parameter count, it appears to be improving in performance with more and more samples seen. And so you can sort of think that, you know, if we just keep bumping the parameter count and increasing the examples seen, which is the line of thinking for language models, then it'll keep getting better.[00:35:27] Peter Robicheaux: So how does it actually do at finding... oh, it also improves with resolution, which you would expect. This is the ImageNet classification accuracy: it does better if you increase the resolution, which means that it's actually leveraging and finding fine-grained visual features.[00:35:44] Peter Robicheaux: And so how does it actually do compared to CLIP on COCO? Well, you can see that if you slap a transformer detection head on it, it gets 60.2 on COCO, which is also within spitting distance of SOTA, which means that it does a very good job of [00:36:00] finding visual features. But you could say, okay, well, wait a second.[00:36:03] Peter Robicheaux: CLIP got to 59.1, so how does this prove your claim at all?
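The random-prefix reconstruction objective can be sketched in a few lines. This is a toy illustration, not AIMv2's actual training code: sample a random prefix length, treat the prefix image tokens as context, and score a prediction of the remaining tokens with mean squared error. The "predictor" here is a trivial stand-in (it just guesses the prefix mean) purely so the loss computation is runnable; a real model would be a decoder-only transformer, and the reconstruction would be patch embeddings, not scalars.

```python
# Toy sketch of an AIMv2-style objective: random prefix of image tokens
# as context, MSE on reconstructing the remaining tokens.
import random

def mse(pred, target):
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def prefix_reconstruction_loss(image_tokens, rng):
    # Random split point; prefix and suffix are both non-empty.
    prefix_len = rng.randint(1, len(image_tokens) - 1)
    prefix, suffix = image_tokens[:prefix_len], image_tokens[prefix_len:]
    guess = sum(prefix) / len(prefix)   # stand-in for a learned predictor
    prediction = [guess] * len(suffix)
    return mse(prediction, suffix)

rng = random.Random(0)
tokens = [0.1, 0.2, 0.3, 0.4, 0.5]    # made-up scalar "image tokens"
print(prefix_reconstruction_loss(tokens, rng))
```

Because no labels beyond the image itself are needed, this objective scales with raw data, which is what the non-saturating curves in the talk are pointing at.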
Doesn't that mean that CLIP, which is known to be CLIP-blind and do badly on MMVP, is able to achieve very high performance on this fine-grained visual feature task of object detection? Well, they train on tons of data.[00:36:24] Peter Robicheaux: They train on Objects365, COCO, Flickr, and everything else. And so I think that this benchmark doesn't do a great job of selling how good of a pre-trained model AIMv2 is. We would like to see the performance with fewer data examples, not trained to convergence on object detection. So seeing it in the real world on a dataset like Roboflow 100, I think, would be quite interesting.[00:36:48] Peter Robicheaux: And our final, final pick for paper of 2024 would be Moondream. So introducing Vik to talk about that.[00:36:54] swyx: But overall, that was exactly what I was looking for. Like, best of 2024, an amazing job. Yeah, if there's any other questions while Vik gets set up, [00:37:00] like vision stuff,[00:37:07] swyx: yeah,[00:37:11] swyx: Vik, go ahead. Hi,[00:37:13] Vik Korrapati / Moondream[00:37:13] question: Well, while we're getting set up, hi, over here, thanks for the really awesome talk. One of the things that's been weird and surprising is that the foundation model companies, even these multimodal LLMs, they're just, like, worse than RT-DETR at detection still. Like, if you wanted to pay a bunch of money to auto-label your detection dataset, giving it to OpenAI or Claude would be like a big waste.[00:37:37] question: So I'm curious, just like, even PaliGemma 2 is worse. So I'm curious to hear your thoughts on how come nobody's cracked the code on a generalist that really, you know, beats a specialist model in computer vision like they have in LLM land.[00:38:00][00:38:01] Isaac Robinson: Okay. It's a very, very interesting question. I think it depends on the specific domain.
For image classification, it's basically there. As AIMv2 showed, a simple attentional probe on the pre-trained features gets like 90%, which is as well as anyone does. The bigger question is why it isn't transferring to object detection, especially real-time object detection.[00:38:25] Isaac Robinson: I think, in my mind, there are two answers. One is that object detection architectures are really, really domain-specific. You know, we see all these super complicated things, and it's not easy to build something that just transfers naturally like that, whereas for image classification, you know, CLIP pre-training transfers super quickly.[00:38:48] Isaac Robinson: And the other thing is, until recently, the real-time object detectors didn't even really benefit from pre-training. Like, you see the YOLOs that are essentially saturated, showing very little [00:39:00] difference with pre-training improvements, or with using a pre-trained model at all. So it's not surprising, necessarily, that people aren't looking at the effects of better and better pre-training on real-time detection.[00:39:12] Isaac Robinson: Maybe that'll change in the next year. Does that answer your question?[00:39:17] Peter Robicheaux: Can you guys hear me? Yeah, one thing I want to add, just to summarize, is that until 2024, you know, we haven't really seen a combination of transformer-based object detectors and fancy losses, and PaliGemma suffers from the same problem. Which is basically to say that the ResNets, or the convolutional models, have all these extreme optimizations for doing object detection, but essentially, I think it's kind of been shown now that convolutional models just don't benefit from pre-training and just don't have the level of intelligence of transformer models.[00:39:56] swyx: Awesome.
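To make Isaac's point about attentional probes concrete, here is a minimal sketch of a single-query attention-pooling probe over frozen patch features. This is illustrative only: the shapes, variable names, and the single-query design are assumptions for the sketch, and AIMv2's actual probe may differ (e.g. multi-head attention, learned per-class queries).

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_probe(features, query, W_out):
    """Attention-pool frozen backbone features with one learned query,
    then apply a linear classification head.

    features: (n_patches, d) frozen patch features from the backbone
    query:    (d,) learned probe query vector
    W_out:    (d, n_classes) linear classifier weights
    """
    d = features.shape[1]
    scores = features @ query / np.sqrt(d)      # (n_patches,) attention logits
    weights = np.exp(scores - scores.max())
    weights = weights / weights.sum()           # softmax over patches
    pooled = weights @ features                 # (d,) weighted patch average
    return pooled @ W_out                       # (n_classes,) class logits

# Toy example: 14x14 patch grid with 64-d features, 10 classes.
features = rng.normal(size=(196, 64))
query = rng.normal(size=64)
W_out = rng.normal(size=(64, 10))
logits = attention_probe(features, query, W_out)
print(logits.shape)  # -> (10,)
```

Only the query and `W_out` would be trained; the backbone stays frozen, which is what makes the probe a cheap measure of feature quality.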
Hi,[00:39:59] Vik Korrapati: can [00:40:00] you hear me?[00:40:01] swyx: Cool. I hear you. See you. Are you sharing your screen?[00:40:04] Vik Korrapati: Hi. Might have forgotten to do that. Let me do[00:40:07] swyx: that. Sorry, should have done[00:40:08] Vik Korrapati: that.[00:40:17] swyx: Here's your screen. Oh, classic. You might have to quit Zoom and restart. What? It's fine. We have a capture of your screen.[00:40:34] swyx: So let's get to it.[00:40:35] Vik Korrapati: Okay, easy enough.[00:40:49] Vik Korrapati: All right. Hi, everyone. My name is Vik. I've been working on Moondream for almost a year now. Like Shawn mentioned, I just went and looked, and it turns out the first version I released was December [00:41:00] 29, 2023. It's been a fascinating journey. So Moondream started off as a tiny vision language model. Since then, we've expanded scope a little bit to also try and build some tooling, client libraries, et cetera, to help people really deploy it.[00:41:13] Vik Korrapati: Unlike traditional large models that are focused on assistant-type use cases, we're laser-focused on building capabilities that developers can use to build vision applications that can run anywhere. In a lot of cases for vision, more so than for text, you really care about being able to run on the edge, run in real time, et cetera.[00:41:40] Vik Korrapati: So that's really important. We have different output modalities that we support. There's query, where you can ask general English questions about an image and get back human-like answers. There's captioning, which a lot of our users use for generating synthetic datasets to then train diffusion models and whatnot.[00:41:57] Vik Korrapati: We've done a lot of work to minimize hallucinations there, [00:42:00] so that's used a lot.
We have open vocabulary object detection built in, similar to a couple of more recent models like PaliGemma, et cetera, where rather than having to train a dedicated model, you can just say, show me soccer balls in this image, or show me if there are any deer in this image, and it'll detect it.[00:42:14] Vik Korrapati: More recently, earlier this month, we released pointing capability, where if all you're interested in is the center of an object, you can just ask it to point out where that is. This is very useful when you're doing, you know, UI automation type stuff. Let's see, we have two models out right now.[00:42:33] Vik Korrapati: There's a general purpose 2B param model. It's fine if you're running on a server, it's good for our local LLaMA desktop friends, and it can run on flagship mobile phones. And then there's the 0.5B model, which uses much [00:43:00] less memory, even with our not yet fully optimized inference client.[00:43:06] Vik Korrapati: So the way we built our 0.5B model was to start with the 2 billion parameter model and prune it while doing continual training to retain performance. Our objective during the pruning was to preserve accuracy across a broad set of benchmarks. The way we went about it was to estimate the importance of different components of the model, like attention heads, channels, MLP rows, and whatnot, using basically a technique based on the gradient.[00:43:37] Vik Korrapati: I'm not sure how much people want to know details. We'll be writing a paper about this, but feel free to grab me if you have more questions. Then we iteratively prune a small chunk that minimizes the loss in performance, and retrain the model to recover performance and bring it back. The 0.5B we released is more of a proof of concept that this is possible.[00:43:54] Vik Korrapati: I think the thing that's really exciting about this is it makes it possible for developers to build using the 2B param [00:44:00] model and just explore, build their application, and then once they're ready to deploy, figure out what exactly they need out of the model and prune those capabilities into a smaller form factor that makes sense for their deployment target.[00:44:12] Vik Korrapati: So yeah, very excited about that. Let me talk to you folks a little bit about another problem I've been working on recently, which is similar to the clocks example we've been talking about. We had a customer reach out who had a bunch of gauges out in the field. This is very common in manufacturing and oil and gas, where you have a bunch of analog devices that you need to monitor.[00:44:34] Vik Korrapati: It's expensive to have humans look at those and monitor stuff and make sure that the system gets shut down when the temperature goes over 80 or something. So I was like, yeah, this seems easy enough. Happy to help you distill that. Let's get it going. Turns out our model couldn't do it at all.[00:44:51] Vik Korrapati: I went and looked at other open source models to see if I could just generate a bunch of data and learn from that. Did not work either. So I was like, let's look at what the folks with [00:45:00] hundreds of billions of dollars in market cap have to offer. And yeah, that doesn't work either. My hypothesis is that the way these models are trained is using a large amount of image-text data scraped from the internet.[00:45:15] Vik Korrapati: And that can be biased. In the case of gauges, most gauge images aren't gauges in the wild, they're product images. Detail images like these, where the gauge is always set to zero.
It's paired with alt text that says something like GIVTO, pressure sensor, PSI, zero to 30, or something. And so the models are fairly good at picking up those details.[00:45:35] Vik Korrapati: It'll tell you that it's a pressure gauge. It'll tell you what the brand is, but it doesn't really learn to pay attention to the needle over there. And so, yeah, that's a gap we need to address. So naturally my mind goes to, let's use synthetic data to solve this problem. That works, but it's problematic, because it turned out we needed millions of synthetic gauge images to get to reasonable performance.[00:45:57] Vik Korrapati: And thinking about it, reading a gauge is [00:46:00] not a zero-shot process in our minds, right? Like, if you had to tell me the reading in Celsius for this real-world gauge, there's two dials on there. So first you have to figure out which one you have to be paying attention to, like the inner one or the outer one.[00:46:14] Vik Korrapati: You look at the tip of the needle, you look at what labels it's between, and you count how many ticks and do some math to figure out what the reading probably is. So what happens if we just add that as a chain of thought, to allow the model to better learn the subtasks it needs to perform to accomplish this goal?[00:46:37] Vik Korrapati: So you can see in this example, this was actually generated by the latest version of our model. It's like, okay, Celsius is the inner scale. It's between 50 and 60. There's 10 ticks. So the second tick. It's a little debatable here, like, there's a weird shadow situation going on, the dial is off, so I don't know what the ground truth is, but it works okay.[00:46:57] Vik Korrapati: The points [00:47:00] over there are actually grounded. I don't know if this is easy to see, but when I click on those, there's a little red dot that moves around on the image.
The model actually has to predict where these points are. I was already trying to do this with bounding boxes, but then Molmo came out with pointing capabilities,[00:47:15] Vik Korrapati: and pointing is a much better paradigm to represent this. We see pretty good results. This one's actually for clock reading. I couldn't find our chart for gauge reading at the last minute. So the light blue chart is with our grounded chain of thought. We built a clock reading benchmark of about 500 images, and this measures accuracy on that. You can see the model is a lot more sample-efficient when you're using the chain of thought. Another big benefit from this approach is you can kind of understand how the model is doing it and how it's failing. So in this example, the actual correct reading is 54 Celsius, and the model output [00:48:00] 56. Not too bad, but you can actually go and see where it messed up. Like, it got a lot of these right, except instead of saying it was on the 7th tick, it actually predicted that it was the 8th tick, and that's why it went with 56.[00:48:14] Vik Korrapati: So now that you know it's failing in this way, you can adjust how you're doing the chain of thought to maybe say, actually, count out each tick from 40, instead of just trying to say it's the eighth tick. Or you might say, okay, I see that there's that middle thing, I'll count from there instead of all the way from 40.[00:48:31] Vik Korrapati: So it helps a ton. The other thing I'm excited about is few-shot prompting, or test-time training, with this. Like, if a customer has a specific gauge that we're seeing minor errors on, they can give us a couple of examples where, if it's mis-detecting the needle, they can go in and correct that in the chain of thought.[00:48:49] Vik Korrapati: And hopefully that works the next time. Now, it's an exciting approach, but we've only applied it to clocks and gauges.
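The sub-steps Vik walks through reduce to a simple interpolation once the model has identified the scale, the bounding labels, and the tick the needle sits on. The function below is a toy sketch of just that final arithmetic step; the name and signature are illustrative, not Moondream's actual chain-of-thought format.

```python
def read_gauge(lower_label: float, upper_label: float,
               num_ticks: int, tick_index: int) -> float:
    """Final arithmetic step of the gauge-reading chain of thought:
    interpolate between the two labels the needle sits between."""
    tick_size = (upper_label - lower_label) / num_ticks
    return lower_label + tick_index * tick_size

# The example from the talk: inner Celsius scale, needle between
# 50 and 60 with 10 ticks, on the second tick.
print(read_gauge(50, 60, 10, 2))   # -> 52.0

# The failure case: counting in the 40-to-60 span with 10 ticks
# (2 units each), the mis-read 8th tick gives 56, while the
# correct 7th tick gives 54.
print(read_gauge(40, 60, 10, 8))   # -> 56.0
```

Decomposing the task this way is what makes the failure debuggable: each intermediate quantity (scale, labels, tick index) can be checked and corrected independently.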
The real question is, is it going to generalize? Probably. Like, there's some evidence [00:49:00] from text models that when you train on a broad number of tasks, it does generalize, and I'm seeing some signs of that with our model as well.[00:49:05] Vik Korrapati: So, in addition to the image-based chain of thought stuff, I also added some spelling-based chain of thought to help it better understand OCR, I guess. I don't understand why everyone doesn't do this, by the way. Like, it's a trivial benchmark question. It's very, very easy to nail. But I also wanted to support it for stuff like license plate partial matching, like, hey, does any license plate in this image start with WHA or whatever?[00:49:29] Vik Korrapati: So yeah, that sort of worked. All right, that ends my story about the gauges. If you think about what's going on over here, it's interesting that LLMs are showing enormous progress in reasoning, especially with the latest set of models that we've seen, but I have a feeling that VLMs are lagging behind, as we can see with these tasks that should be very simple for a human to do [00:50:00] but are very easy to find VLMs failing at.[00:50:04] Vik Korrapati: My hypothesis on why this is the case is that on the internet, there's a ton of data that talks about how to reason. There's books about how to solve problems. There's books critiquing the books about how to solve problems. But humans are just so good at perception that we never really talk about it.[00:50:20] Vik Korrapati: Like, maybe in art books, where it's like, hey, to show that that mountain is further away, you need to desaturate it a bit or whatever. But the actual data on how to look at images isn't really present. Also, the data we have is kind of sketchy.
The best source of data we have is image alt-text pairs on the internet, and that's pretty low quality.[00:50:40] Vik Korrapati: So yeah, I think our solution here is really just that we need to teach them how to operate on individual tasks and figure out how to scale that out. All right. Yep. So, conclusion. At Moondream, we're trying to build amazing VLMs that run everywhere. Very hard problem. Much work ahead, but we're making a ton of progress, and I'm really excited [00:51:00] about it. If anyone wants to chat about more technical details about how we're doing this, or is interested in collaborating, please hit me up.[00:51:08] Isaac Robinson: Yeah,[00:51:09] swyx: like, when people say multi-modality, you know, I always think about vision as the first among equals in all the modalities. So I really appreciate having the experts in the room. Get full access to Latent Space at www.latent.space/subscribe