POPULARITY
How much does the way political battles are fought influence Czechs' pre-election decision-making? Analyst Tereza Friedrichová of the research agency NMS answers that question. According to the agency's latest survey, ethics in political campaigning is a significant decision-making factor for 9 out of 10 Czechs, across sociodemographic groups. "What differs is only how much weight people give to ethics and the moments at which they are willing to back away from ethical standards," the analyst adds. All episodes of the podcast Jak to vidí... can be conveniently listened to in the mujRozhlas mobile app for Android and iOS or at mujRozhlas.cz.
"That is no democracy," Andrej Babiš declared in an interview for Deník.cz, reacting to Donald Trump's radical tariff policy. He added that the main thing he has in common with the American president is the red color of his campaign. Yet as recently as last November he claimed that the two of them have de facto identical programs. "I am convinced that Donald Trump is the best solution for Europe and also for the whole world," he said on CNN Prima News. Two weeks ago, however, the agency NMS published a survey showing that only 20 percent of people in Czechia support Trump's policies, and a full 40 percent see Andrej Babiš as the main representative of Trumpism here. A great about-turn began, with Babiš pretending that neither he nor the ANO movement really has anything in common with Trump. At the same time he has noticed that his polling numbers are not rising, so behind-the-scenes negotiations began with Socdem over placing some of its leading politicians on ANO candidate lists. Babiš's approach is to seemingly leave the negotiations to the regional organizations so that he can later say: it was the regions' decision. The talks are not yet finished, however, because Socdem wants to form a formal coalition with ANO. What could be the black swan of the October parliamentary elections? How does Lubomír Zaorálek picture the stock market? And what is the point of Trump's erratic economic policy?
Today we're going to break down three life-threatening syndromes that every CRNA, anesthesia provider, and healthcare professional should know: Serotonin Syndrome, Neuroleptic Malignant Syndrome (NMS), and Malignant Hyperthermia (MH). Though these conditions are rare, their symptoms often overlap—making quick, accurate diagnosis and intervention critical. Garry and Terry take a deep dive into the causes, symptoms, physiology, and treatment protocols for each condition, while also sprinkling in their signature humor and real-world insights. Here's some of what we discuss in this episode:
Slovakia Today, English Language Current Affairs Programme from Slovak Radio
According to the latest data from the research agency NMS, 16% of women in Slovakia suffer from period poverty, a term describing a lack of access to sanitary and menstrual products. We talked to Natália Blahová, a representative of the initiative Dôstojná menštruácia, who explains what this alarming problem means for women and how to fight it.
In Episode 70, we invite back two Bid Out podcast veterans, Jim Toes, President of the Security Traders Association, and Jaret Seiberg, TD Cowen's Washington Research Group Financial Services Policy Expert, for a discussion on the next SEC Administration, likely to be led by Paul Atkins. Jim and Jaret start with an explanation for the low-key, zero-drama nature of the Atkins confirmation hearing for SEC Chair, citing the limited pushback expected on the nomination and the fact that the confirmation process has been streamlined post-GFC. That said, Minority Ranking Member Elizabeth Warren published a 34-page letter of issues and questions for the Chair-Designate covering a wide range of current and historic SEC issues, including many about the time Atkins spent as an SEC Commissioner prior to the GFC. Jim and Jaret discuss several of the topics raised by Senator Warren, including potential conflicts for the Chair, gamification of markets, the future of FINRA and the CAT, and crypto oversight. The pod finishes with Jim's look into his crystal ball to answer the question "will Atkins reverse NMS during his tenure?" This podcast was recorded on April 2, 2025.
Chapter Times:
00:55 - The Zero Drama Atkins Hearing and Next Steps
08:30 - Senator Warren's 34-Page History Lesson
13:22 - The Future State of Completely Partisan Commissioners
23:34 - Will Atkins Reverse Policy Decisions from the Gensler Administration?
31:10 - How Can Atkins Manage Conflicts, Including with Trump?
35:31 - 0DTE, Gamification, 24-Hour Trading and Crypto – Protecting Retail Investors
For relevant disclosures, visit: tdsecurities.com/ca/en/legal#PodcastDisclosure. To learn more about TD Securities, visit us at tdsecurities.com or follow us on LinkedIn @tdsecurities.
In this powerful episode of Queer Storytime, we dive deep into the urgent and historical topic of mutual aid within the LGBTQIA+ community. From the Stonewall era to the HIV/AIDS crisis and today's escalating attacks on gender and sexually affirming health care, mutual aid has been a lifeline for queer and trans folks. Stevie lays out the distinction between charity and solidarity, highlighting why institutional approaches often fall short. As mainstream LGBTQIA+ organizations amass wealth, the most vulnerable in our community — Black and Brown trans folks, unhoused queer youth, disabled queer individuals, and those living under oppressive legal systems — are left behind. This episode announces a groundbreaking new initiative: A Mutual Aid Collective for LGBTQIA+ Health & Wellbeing. This network will provide access to holistic health resources, guidance, and sliding-scale services for queer and trans individuals, especially those living in states where affirming healthcare is under attack. Listen now to learn about our community's resilience, why we need to reclaim the true spirit of mutual aid, and how you can support this collective effort. Remember, our survival is the ultimate act of resistance.
Key Topics Covered:
The historical precedent of mutual aid within the LGBTQIA+ community.
The distinction between charity and solidarity.
Criticism of mainstream LGBTQIA+ organizations hoarding wealth.
Introduction of the Mutual Aid Collective for LGBTQIA+ Health & Wellbeing.
How listeners can support or access this new initiative.
Calls for community-based, nonhierarchical, direct aid over bureaucratic charity models.
Call to Action:
✨ Get Involved: If you're in a position to contribute financially, share skills, or simply spread the word, support the Mutual Aid Collective by visiting here - https://opencollective.com/queer-trans-thriving
Now that the dust has settled, it turns out that Assassin's Creed Shadows is a really, really good game. We also talk about some NMS new update news, Dune Awakening pricing, and a bit of WoW Hardcore DDOS drama. That and more on this episode of the New Overlords Podcast with Sema and @MaxTheGrey.
This week we talk about the NMS expedition and the new planet tech Hello Games has added. Then we catch up on what's known and not known about Light No Fire, how the NMS stuff figures in, and what we hope for. That and more on this episode of the New Overlords Podcast with Sema …
C3 Metals CEO Dan Symons joined Steve Darling from Proactive to announce the company's successful partnership with Nine Miles of Smiles in delivering essential dental care and oral hygiene education to the Bellas Gate community in rural Jamaica. This impactful initiative was made possible through a generous grant from the RCF Foundation, reinforcing C3 Metals' dedication to fostering long-term, sustainable wellbeing in the communities where it operates. This after a chance meeting with Canadian Sprinter Donovan Bailey. Symons emphasized that the outreach program reflects C3 Metals' broader vision of creating lasting value beyond its mineral exploration activities. By investing in sustainable opportunities and meaningful partnerships, the company aims to empower local communities in ways that endure well beyond its projects. With financial support from the RCF Foundation, C3 Metals partnered with NMS, a 100% volunteer-run and registered Canadian non-profit charity, to set up a mobile dental clinic in the Bellas Gate region. The initiative provided critical dental care and health screenings to 175 individuals. Beyond dental care, the initiative placed a strong emphasis on education and long-term impact. Grade 6 students were trained and certified as health advocates, equipping them to lead future oral hygiene efforts in their communities. Meanwhile, Grade 4 and 5 students engaged in hands-on projects exploring the health impacts of sugar, promoting healthier habits and critical thinking skills. Through this collaboration, C3 Metals and NMS have not only improved immediate access to dental care but also laid the groundwork for sustainable health education in rural Jamaica. This initiative serves as a testament to C3 Metals' commitment to responsible corporate citizenship and its dedication to enhancing community wellbeing beyond the mining sector. #proactiveinvestors #c3metalsinc #tsxv #cccm #otcqb #cuauf #Mining #CSR #Sustainability #DentalCare #NineMilesOfSmiles #CommunitySupport #OralHealth #Jamaica #MiningNews
Trans On The Road In The Deep South
Content Warning: This episode includes discussions of sensitive topics, including sexual abuse, especially between time stamps 13:30-19:00. Listener discretion is advised.
Description: In this heartfelt episode of Queer Story Time, we're joined by Terrance, a 71-year-old trans man, to explore what activism can look like at different stages of life. Terrance shares his unique perspective on creating "safe harbors" for others and the role this plays in fostering community and healing. His story is a reminder that activism isn't limited to marching in the streets—it can take many forms, all equally valid and impactful. We also dive into the importance of mindfulness, yoga, and other inward practices that allow us to break down societal norms and assumptions around gender and sexuality. By committing to personal growth, we can create positive change both within ourselves and in the world.
Highlights of this episode include:
Terrance's reflections on activism and why providing a safe space is his way of contributing.
The need for all of us to do inward work to challenge societal expectations and heal from harmful norms.
A discussion on the evolving nature of activism and holding space for queer and trans elders who've already fought many battles.
A preview of upcoming episodes that will continue exploring conversion therapy and queer & trans healing.
Big Announcements:
Queer Story Time is transitioning off Meta platforms (Facebook, Instagram, Threads) and moving to decentralized social media like:
Mastodon - @futuredrstevie@mastodon.social
PixelFed - @futuredoctorstevie
Loops by PixelFed - @futuredoctorstevie
Come get a FREE copy of "Your Complete Checklist to Achieving Optimal Health as an LGBTQIA+ Person" by joining the QST Newsletter/Mailing List.
Join the new Queer Story Time Community Hub on Patreon for early access to episodes, exclusive content, and monthly Zoom gatherings. Paid tiers start at just $5/month.
Subscribe to the Queer Story Time YouTube channel for the new QST Reacts series, featuring Stevie's take on LGBTQIA+ topics in short, impactful videos.
Ways to Support the Podcast:
Subscribe and share this episode with friends and family.
Follow us on decentralized social platforms and join the movement for safer, more inclusive online spaces.
Tune in to Episode 21, where we continue the conversation on conversion therapy with two experts in the field. Until then, thank you for being part of the Queer Story Time community. Together, we're building a brave space for queer and trans stories to be heard.
Connect with Your Host Stevie: QueerStorytimeThePodcast@gmail.com
Leave A Star Rating, Written Review, & Follow QST Podcast: Stevie encourages QST listeners to leave a star rating and a written review on the podcast platform of your choice and to share the podcast with friends and family! This helps QST expand to an even bigger audience globally.
Host: Stevie Inghram, M.S., YT, AWC, NMS-4 (She/They)
Support this podcast at — https://redcircle.com/queer-story-time-the-podcast/donations
Guests: Pavol Baboš (sociologist at Comenius University in Bratislava) and Mikuláš Hanes (head of qualitative research at the agency NMS). | Friday's anti-government protests drew roughly 100,000 people in more than thirty towns and cities. Participants reject any notion of Slovakia turning away from the European Union and NATO, expressed support for Ukraine in the face of Russia's invasion, and also called for the resignation of Prime Minister Robert Fico. Dissatisfaction with the governing coalition's performance is also borne out by a recent NMS survey, in which as many as 64% of respondents lean toward a negative assessment. The opposition, however, faces similar public dissatisfaction. Where do the dividing lines in society currently run? Is it primarily the domestic political situation that divides people, or do other themes resonate as well, such as values and foreign policy? Who contributes to this polarization, and how? How does the current situation translate into voter preferences for the political parties? For most parties there has so far been seemingly no fundamental shift in preferences compared with the elections. Is the entrenchment of opinions slowing movement between camps? Is lethargy toward politics growing in society, or is society mobilizing instead? Could talk of early elections inject new dynamics into the polls? Which implemented policies do voters view positively and which negatively? | Society is divided by the political situation. | Hosted by: Matej Baránek; | The Z prvej ruky discussion is produced by Slovak Radio, Rádio Slovensko, SRo1. We broadcast every working day at 12:30 on Rádio Slovensko.
*Support the Dobré ráno podcast in the Toldo app at sme.sk/extradobrerano. A luxury holiday, a visit to Putin, and a dispute with Ukraine over gas transit have achieved something new: they united the opposition, including Igor Matovič's movement Slovensko and the Demokrati. On top of that came a poll from NMS suggesting that the opposition could put together a coalition, but also only with Matovič. So what are the relationships there, and why? In the Dobré ráno podcast, Tomáš Prokopčák puts the questions to Peter Tkačenko. Audio sources: TA3, Facebook/Igor Matovič. Recommendation: Today I recommend a piece by our colleague Ján Krempaský, "Netúžila po pozornosti, hoci nás všetkých prevyšovala" ("She never craved attention, even though she surpassed us all"). One of the victims of Thursday's attack at the secondary school in Spišská Stará Ves, Mária Semančíková, the deputy head teacher, was Ján's classmate. In a very honest, personal, and emotional piece, he remembers what kind of person she was. – All SME daily podcasts can be found at sme.sk/podcasty – Also subscribe to the audio version of the SME.sk daily newsletter with the most important news at sme.sk/brifing
In this episode of Queer Story Time, Prince Manvendra Singh Gohil and HH Prince DeAndre, Duke of Hanumanteshwar, share their personal journeys as passionate advocates for LGBTQ+ rights. They discuss their struggles with conversion therapy, the fight for marriage equality in India, and the creation of an LGBTQ+ community campus to support and empower our community. They both offer powerful insights into the harm of conversion practices and emphasize the need for change in both medical and legal systems. They also highlight the ongoing progress in understanding gender and sexuality, and how their work through retreats, activism, and community-building helps queer and trans youth find their voice and power.
Prince Manvendra Singh Gohil, the Crown Prince of Rajpipla, hails from the 650-year-old Gohil Dynasty and made history as the first Indian royal to publicly come out as gay. A global icon in the fight for LGBTQ+ rights, Prince Manvendra is the chairperson and co-founder of Lakshya Trust, which serves to empower the LGBTQ+ community in India. He has been featured on platforms such as Oprah Winfrey and Keeping Up with the Kardashians. A passionate advocate for HIV awareness, he serves as the Brand Ambassador for the AIDS Healthcare Foundation India Cares. Currently, Prince Manvendra is spearheading the development of an LGBTQIA+ community campus in India, a revolutionary project aimed at social and financial empowerment for LGBTQ+ individuals.
Joining him is HH Prince DeAndre, the Duke of Hanumanteshwar. An esteemed author and LGBTQ+ activist, Prince DeAndre is the Creative Director of H1927LLC, blending fashion with philanthropy through the "Fashion for a Cause" initiative. He co-authored the memoir A Royal Commitment: Ten Years of Marriage and Activism with Prince Manvendra, documenting their powerful journey of love and advocacy. Prince DeAndre's leadership in wellness is evident through his exclusive retreats, which merge yoga, cultural experiences, and direct engagement with royalty. His personal story of resilience, especially as he navigates life with Spondyloarthritis and hidden disabilities, serves as an inspiring reminder of the strength found in transformation and activism.
Together, Prince Manvendra and Prince DeAndre continue to break boundaries and advocate for equality, sharing their insights and experiences in this inspiring episode.
Key Topics Covered:
• The lasting impacts of conversion therapy and the ongoing fight for marriage equality in India
• Advocacy work and the creation of an LGBTQ+ community campus in India
• Stories of strength, resilience, and the importance of living authentically
• How societal pressures and religious beliefs influence family dynamics and harm LGBTQ+ individuals
• The role of spiritual practices like yoga in healing and self-discovery for the LGBTQ+ community
• The intersection of personal and political journeys: fighting for rights while living authentically
• Activism in both the U.S. and India, and how the two worlds intersect
Special Mentions:
A new line of gender-fluid swimsuits and underwear
Upcoming yoga retreats and spiritual gatherings that focus on queer wellness
The importance of listening to queer and trans elders for wisdom and guidance
Guest Info: Our guests include activists and creators of significant change within the LGBTQ+ community.
Stay connected with them on social media:
Instagram: @princemanvendragohil, @duke.hanumanteshwar, @haumanteshwar1927tm
Connect with Your Host Stevie: QueerStorytimeThePodcast@gmail.com
Join the QST Community Facebook Group: Come connect with our vibrant community here, it's free to join! Facebook Group: https://www.facebook.com/share/JCiyGgCnpX7gPbfU/?mibextid=K35Xf
Queer Story Time Email List: Stay updated with QST episodes, and special news, events, and future opportunities. Email List Sign-Up: http://eepurl.com/iSc-HQ
Leave A Star Rating, Written Review, & Follow QST Podcast: I encourage QST listeners to leave a star rating and a written review on the podcast platform of your choice and to share the podcast with friends and family! This helps QST expand to an even bigger audience globally.
Be sure to follow your host Stevie on Instagram @queertransthriving and the QST YouTube Channel: https://www.youtube.com/channel/UCsV_UVohIXCZkSXExp8aYkA
Support QST & Buy Me A Coffee: If you'd like to support Stevie as your QST host, please consider buying me a coffee at this link and check out my additional offerings: https://buymeacoffee.com/queertransthriving
Get In Touch with Stevie via E-Mail: queerstorytimethepodcast@gmail.com
Host: Stevie Inghram, M.S., YT, AWC, NMS-4 (they/them or she/her)
Support this podcast at — https://redcircle.com/queer-story-time-the-podcast/donations
In this deeply heartfelt and insightful episode, we sit down with Dr. Lulu, a queer Nigerian-born pediatrician, former Lt. Col. in the U.S. Air Force, and a mom of a transgender daughter. Her mission centers on youth suicide prevention, particularly among Black gender-diverse youth, and she offers gender-affirming coaching through her practice, Dr. Lulu's PRIDE Corner. Recognized for her advocacy, she has received multiple awards, including the 2021 San Antonio LGBT Chamber Youth Advocate of the Year and the Atlanta Trans Life Award's Pioneer of the Year. In this conversation we explore themes of radical self-love, parenting queer & trans children, and the interconnectedness of community in supporting gender and sexually expansive individuals. Dr. Lulu shares her personal journey, the inspiration behind her work, and actionable insights for parents, educators, and allies alike.
Dr. Lulu emphasizes why the future is queer and why inward work is essential to building a more inclusive and affirming world. From asking, "What if my child is queer?" to the transformative power of radical self-belief, this episode is a must-listen for anyone seeking to expand their understanding and support for the LGBTQIA+ community.
Key Highlights:
Dr. Lulu's journey to radical self-love and acceptance.
The importance of unlearning biases and relearning acceptance.
Addressing societal fears around queerness: "What if my child is queer?"
The significance of creating affirming spaces for queer/trans youth.
The power of the village: collective responsibility in raising and saving children.
Dr. Lulu's work with her nonprofit, Lulu's Angels Haven Inc, providing safe spaces for Black queer youth.
Insights for parents and allies on building supportive environments.
Dr. Lulu's upcoming books and ongoing advocacy work.
Where to Find Dr. Lulu:
Website: www.dr-lulu.com
Instagram: @themomatrician
Facebook: Dr. Lulu Angels Haven
Nonprofit: Dr. Lulu's Angels Haven Inc.
Podcast: Moms 4 Trans Kids
Connect with Your Host Stevie: QueerStorytimeThePodcast@gmail.com
Join the QST Community Facebook Group: Come connect with our vibrant community here, it's free to join! Facebook Group: https://www.facebook.com/share/JCiyGgCnpX7gPbfU/?mibextid=K35Xf
Queer Story Time Email List: Stay updated with QST episodes, and special news, events, and future opportunities. Email List Sign-Up: http://eepurl.com/iSc-HQ
Leave A Star Rating, Written Review, & Follow QST Podcast: I encourage QST listeners to leave a star rating and a written review on the podcast platform of your choice and to share the podcast with friends and family! This helps QST expand to an even bigger audience globally.
Be sure to follow your host Stevie on Instagram @queertransthriving and the QST YouTube Channel: https://www.youtube.com/channel/UCsV_UVohIXCZkSXExp8aYkA
Support QST & Buy Me A Coffee: If you'd like to support Stevie as your QST host, please consider buying me a coffee at this link and check out my additional offerings: https://buymeacoffee.com/queertransthriving
Get In Touch with Stevie via E-Mail: queerstorytimethepodcast@gmail.com
Host: Stevie Inghram, M.S., YT, AWC, NMS-4 (they/them or she/her)
Support this podcast at — https://redcircle.com/queer-story-time-the-podcast/donations
Happy holidays! We'll be sharing snippets from Latent Space LIVE! through the break bringing you the best of 2024! We want to express our deepest appreciation to event sponsors AWS, Daylight Computer, Thoth.ai, StrongCompute, Notable Capital, and most of all all our LS supporters who helped fund the gorgeous venue and A/V production!For NeurIPS last year we did our standard conference podcast coverage interviewing selected papers (that we have now also done for ICLR and ICML), however we felt that we could be doing more to help AI Engineers 1) get more industry-relevant content, and 2) recap 2024 year in review from experts. As a result, we organized the first Latent Space LIVE!, our first in person miniconference, at NeurIPS 2024 in Vancouver.The single most requested domain was computer vision, and we could think of no one better to help us recap 2024 than our friends at Roboflow, who was one of our earliest guests in 2023 and had one of this year's top episodes in 2024 again. Roboflow has since raised a $40m Series B!LinksTheir slides are here:All the trends and papers they picked:* Isaac Robinson* Sora (see our Video Diffusion pod) - extending diffusion from images to video* SAM 2: Segment Anything in Images and Videos (see our SAM2 pod) - extending prompted masks to full video object segmentation* DETR Dominancy: DETRs show Pareto improvement over YOLOs* RT-DETR: DETRs Beat YOLOs on Real-time Object Detection* LW-DETR: A Transformer Replacement to YOLO for Real-Time Detection* D-FINE: Redefine Regression Task in DETRs as Fine-grained Distribution Refinement* Peter Robicheaux* MMVP (Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs)* * Florence 2 (Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks) * PalíGemma / PaliGemma 2* PaliGemma: A versatile 3B VLM for transfer* PaliGemma 2: A Family of Versatile VLMs for Transfer* AlMv2 (Multimodal Autoregressive Pre-training of Large Vision Encoders) * Vik Korrapati - MoondreamFull Talk on YouTubeWant more content like this? Like and subscribe to stay updated on our latest talks, interviews, and podcasts.Transcript/Timestamps[00:00:00] Intro[00:00:05] AI Charlie: welcome to Latent Space Live, our first mini conference held at NeurIPS 2024 in Vancouver. This is Charlie, your AI co host. When we were thinking of ways to add value to our academic conference coverage, we realized that there was a lack of good talks, just recapping the best of 2024, going domain by domain.[00:00:36] AI Charlie: We sent out a survey to the over 900 of you. who told us what you wanted, and then invited the best speakers in the Latent Space Network to cover each field. 200 of you joined us in person throughout the day, with over 2, 200 watching live online. Our second featured keynote is The Best of Vision 2024, with Peter Robichaud and Isaac [00:01:00] Robinson of Roboflow, with a special appearance from Vic Corrapati of Moondream.[00:01:05] AI Charlie: When we did a poll of our attendees, the highest interest domain of the year was vision. And so our first port of call was our friends at Roboflow. Joseph Nelson helped us kickstart our vision coverage in episode 7 last year, and this year came back as a guest host with Nikki Ravey of Meta to cover segment Anything 2.[00:01:25] AI Charlie: Roboflow have consistently been the leaders in open source vision models and tooling. With their SuperVision library recently eclipsing PyTorch's Vision library. 
And Roboflow Universe hosting hundreds of thousands of open source vision datasets and models. They have since announced a 40 million Series B led by Google Ventures.[00:01:46] AI Charlie: Woohoo.[00:01:48] Isaac's picks[00:01:48] Isaac Robinson: Hi, we're Isaac and Peter from Roboflow, and we're going to talk about the best papers of 2024 in computer vision. So, for us, we defined best as what made [00:02:00] the biggest shifts in the space. And to determine that, we looked at what are some major trends that happened and what papers most contributed to those trends.[00:02:09] Isaac Robinson: So I'm going to talk about a couple trends, Peter's going to talk about a trend, And then we're going to hand it off to Moondream. So, the trends that I'm interested in talking about are These are a major transition from models that run on per image basis to models that run using the same basic ideas on video.[00:02:28] Isaac Robinson: And then also how debtors are starting to take over the real time object detection scene from the YOLOs, which have been dominant for years.[00:02:37] Sora, OpenSora and Video Vision vs Generation[00:02:37] Isaac Robinson: So as a highlight we're going to talk about Sora, which from my perspective is the biggest paper of 2024, even though it came out in February. Is the what?[00:02:48] Isaac Robinson: Yeah. Yeah. So just it's a, SORA is just a a post. So I'm going to fill it in with details from replication efforts, including open SORA and related work, such as a stable [00:03:00] diffusion video. And then we're also going to talk about SAM2, which applies the SAM strategy to video. And then how debtors, These are the improvements in 2024 to debtors that are making them a Pareto improvement to YOLO based models.[00:03:15] Isaac Robinson: So to start this off, we're going to talk about the state of the art of video generation at the end of 2023, MagVIT MagVIT is a discrete token, video tokenizer akin to VQ, GAN, but applied to video sequences. And it actually outperforms state of the art handcrafted video compression frameworks.[00:03:38] Isaac Robinson: In terms of the bit rate versus human preference for quality and videos generated by autoregressing on these discrete tokens generate some pretty nice stuff, but up to like five seconds length and, you know, not super detailed. And then suddenly a few months later we have this, which when I saw it, it was totally mind blowing to me.[00:03:59] Isaac Robinson: 1080p, [00:04:00] a whole minute long. We've got light reflecting in puddles. That's reflective. Reminds me of those RTX demonstrations for next generation video games, such as Cyberpunk, but with better graphics. You can see some issues in the background if you look closely, but they're kind of, as with a lot of these models, the issues tend to be things that people aren't going to pay attention to unless they're looking for.[00:04:24] Isaac Robinson: In the same way that like six fingers on a hand. You're not going to notice is a giveaway unless you're looking for it. So yeah, as we said, SORA does not have a paper. So we're going to be filling it in with context from the rest of the computer vision scene attempting to replicate these efforts. So the first step, you have an LLM caption, a huge amount of videos.[00:04:48] Isaac Robinson: This, this is a trick that they introduced in Dolly 3, where they train a image captioning model to just generate very high quality captions for a huge corpus and then train a diffusion model [00:05:00] on that. 
Their Sora and their application efforts also show a bunch of other steps that are necessary for good video generation.[00:05:09] Isaac Robinson: Including filtering by aesthetic score and filtering by making sure the videos have enough motion. So they're not just like kind of the generators not learning to just generate static frames. So. Then we encode our video into a series of space time latents. Once again, SORA, very sparse in details.[00:05:29] Isaac Robinson: So the replication related works, OpenSORA actually uses a MAG VIT V2 itself to do this, but swapping out the discretization step with a classic VAE autoencoder framework. They show that there's a lot of benefit from getting the temporal compression, which makes a lot of sense as the Each sequential frames and videos have mostly redundant information.[00:05:53] Isaac Robinson: So by compressing against, compressing in the temporal space, you allow the latent to hold [00:06:00] a lot more semantic information while avoiding that duplicate. So, we've got our spacetime latents. Possibly via, there's some 3D VAE, presumably a MAG VATV2 and then you throw it into a diffusion transformer.[00:06:19] Isaac Robinson: So I think it's personally interesting to note that OpenSORA is using a MAG VATV2, which originally used an autoregressive transformer decoder to model the latent space, but is now using a diffusion diffusion transformer. So it's still a transformer happening. Just the question is like, is it?[00:06:37] Isaac Robinson: Parameterizing the stochastic differential equation is, or parameterizing a conditional distribution via autoregression. It's also it's also worth noting that most diffusion models today, the, the very high performance ones are switching away from the classic, like DDPM denoising diffusion probability modeling framework to rectified flows.[00:06:57] Isaac Robinson: Rectified flows have a very interesting property that as [00:07:00] they converge, they actually get closer to being able to be sampled with a single step. Which means that in practice, you can actually generate high quality samples much faster. Major problem of DDPM and related models for the past four years is just that they require many, many steps to generate high quality samples.[00:07:22] Isaac Robinson: So, and naturally, the third step is throwing lots of compute at the problem. So I didn't, I never figured out how to manage to get this video to loop, but we see very little compute, medium compute, lots of compute. This is so interesting because the the original diffusion transformer paper from Facebook actually showed that, in fact, the specific hyperparameters of the transformer didn't really matter that much.[00:07:48] Isaac Robinson: What mattered was that you were just increasing the amount of compute that the model had. So, I love how in the, once again, little blog posts, they don't even talk about [00:08:00] like the specific hyperparameters. They say, we're using a diffusion transformer, and we're just throwing more compute at it, and this is what happens.[00:08:08] Isaac Robinson: OpenSora shows similar results. The primary issue I think here is that no one else has 32x compute budget. So we end up with these we end up in the middle of the domain and most of the related work, which is still super, super cool. It's just a little disappointing considering the context. 
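[Editor's note] The point above about rectified flows needing far fewer sampling steps than classic DDPM can be made concrete. Below is a minimal, hypothetical sketch (plain PyTorch; `velocity_model` and `dummy_v` are placeholders, not Sora or OpenSora code) of Euler integration of a rectified-flow ODE, where the step count is a knob that can shrink toward one as the learned flow straightens.

```python
import torch

@torch.no_grad()
def sample_rectified_flow(velocity_model, shape, num_steps=4, device="cpu"):
    """Euler-integrate a rectified-flow ODE from noise (t=0) to data (t=1).

    velocity_model(x_t, t) is assumed to predict the velocity field v ~ x_1 - x_0.
    As the flow's trajectories straighten, num_steps can be reduced toward 1
    with little quality loss -- the sampling-speed property discussed above.
    """
    x = torch.randn(shape, device=device)      # x_0 ~ N(0, I)
    dt = 1.0 / num_steps
    for i in range(num_steps):
        t = torch.full((shape[0],), i * dt, device=device)
        v = velocity_model(x, t)               # predicted velocity at (x_t, t)
        x = x + v * dt                         # straight Euler step along the flow
    return x                                   # approximate sample at t = 1

# Toy usage with a stand-in velocity field (a real model would be a diffusion transformer):
dummy_v = lambda x, t: -x
sample = sample_rectified_flow(dummy_v, shape=(2, 4, 8, 8), num_steps=2)
```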
So I think this is a beautiful extension of the framework that was introduced in 22 and 23 for these very high quality per image generation and then extending that to videos.[00:08:39] Isaac Robinson: It's awesome. And it's GA as of Monday, except no one can seem to get access to it because they keep shutting down the login.[00:08:46] SAM and SAM2[00:08:46] Isaac Robinson: The next, so next paper I wanted to talk about is SAM. So we at Roboflow allow users to label data and train models on that data. Sam, for us, has saved our users 75 years of [00:09:00] labeling time.[00:09:00] Isaac Robinson: We are the, to the best of my knowledge, the largest SAM API that exists. We also, SAM also allows us to have our users train just pure bounding box regression models and use those to generate high quality masks which has the great side effect of requiring less training data to have a meaningful convergence.[00:09:20] Isaac Robinson: So most people are data limited in the real world. So anything that requires less data to get to a useful thing is that super useful. Most of our users actually run their object per frame object detectors on every frame in a video, or maybe not most, but many, many. And so Sam follows into this category of taking, Sam 2 falls into this category of taking something that really really works and applying it to a video which has the wonderful benefit of being plug and play with most of our Many of our users use cases.[00:09:53] Isaac Robinson: We're, we're still building out a sufficiently mature pipeline to take advantage of that, but it's, it's in the works. [00:10:00] So here we've got a great example. We can click on cells and then follow them. You even notice the cell goes away and comes back and we can still keep track of it which is very challenging for existing object trackers.[00:10:14] Isaac Robinson: High level overview of how SAM2 works. We there's a simple pipeline here where we can give, provide some type of prompt and it fills out the rest of the likely masks for that object throughout the rest of the video. So here we're giving a bounding box in the first frame, a set of positive negative points, or even just a simple mask.[00:10:36] Isaac Robinson: I'm going to assume people are somewhat familiar with SAM. So I'm going to just give a high level overview of how SAM works. You have an image encoder that runs on every frame. SAM two can be used on a single image, in which case the only difference between SAM two and SAM is that image encoder, which Sam used a standard VIT [00:11:00] Sam two replaced that with a hara hierarchical encoder, which gets approximately the same results, but leads to a six times faster inference, which is.[00:11:11] Isaac Robinson: Excellent, especially considering how in a trend of 23 was replacing the VAT with more efficient backbones. In the case where you're doing video segmentation, the difference is that you actually create a memory bank and you cross attend the features from the image encoder based on the memory bank.[00:11:31] Isaac Robinson: So the feature set that is created is essentially well, I'll go more into it in a couple of slides, but we take the features from the past couple frames, plus a set of object pointers and the set of prompts and use that to generate our new masks. Then we then fuse the new masks for this frame with the.[00:11:57] Isaac Robinson: Image features and add that to the memory bank. [00:12:00] It's, well, I'll say more in a minute. 
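[Editor's note] A rough, hypothetical sketch of the per-frame loop described above: encode the frame, attend against a small bank of recent frame features plus prompts and object pointers, predict a mask, then push the fused features back into the bank. The names (`encoder`, `mask_decoder`, `VideoMaskTracker`) are placeholders rather than the actual SAM 2 API; the sketch only illustrates the bounded "last few frames" memory idea.

```python
from collections import deque

class VideoMaskTracker:
    """Toy illustration of SAM 2-style video segmentation with a bounded memory bank."""

    def __init__(self, encoder, mask_decoder, memory_size=6):
        self.encoder = encoder                   # per-frame image encoder (placeholder)
        self.mask_decoder = mask_decoder         # cross-attends features against the memory bank
        self.memory = deque(maxlen=memory_size)  # FIFO: only the last few frames are kept
        self.object_pointers = []                # compact summaries of the object found so far

    def step(self, frame, prompts=None):
        feats = self.encoder(frame)              # runs on every frame, video or single image
        # Predict this frame's mask by attending to recent memories, pointers, and prompts.
        mask, pointer, fused = self.mask_decoder(
            feats,
            memories=list(self.memory),
            object_pointers=self.object_pointers,
            prompts=prompts or [],
        )
        self.memory.append(fused)                # fuse mask + image features, push into the bank
        self.object_pointers.append(pointer)
        return mask
```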
The just like SAM, the SAM2 actually uses a data engine to create its data set in that people are, they assembled a huge amount of reference data, used people to label some of it and train the model used the model to label more of it and asked people to refine the predictions of the model.[00:12:20] Isaac Robinson: And then ultimately the data set is just created from the engine Final output of the model on the reference data. It's very interesting. This paradigm is so interesting to me because it unifies a model in a dataset in a way that is very unique. It seems unlikely that another model could come in and have such a tight.[00:12:37] Isaac Robinson: So brief overview of how the memory bank works, the paper did not have a great visual, so I'm just, I'm going to fill in a bit more. So we take the last couple of frames from our video. And we take the last couple of frames from our video attend that, along with the set of prompts that we provided, they could come from the future, [00:13:00] they could come from anywhere in the video, as well as reference object pointers, saying, by the way, here's what we've found so far attending to the last few frames has the interesting benefit of allowing it to model complex object motion without actually[00:13:18] Isaac Robinson: By limiting the amount of frames that you attend to, you manage to keep the model running in real time. This is such an interesting topic for me because one would assume that attending to all of the frames is super essential, or having some type of summarization of all the frames is super essential for high performance.[00:13:35] Isaac Robinson: But we see in their later ablation that that actually is not the case. So here, just to make sure that there is some benchmarking happening, we just compared to some of the stuff that's came out prior, and indeed the SAM2 strategy does improve on the state of the art. This ablation deep in their dependencies was super interesting to me.[00:13:59] Isaac Robinson: [00:14:00] We see in section C, the number of memories. One would assume that increasing the count of memories would meaningfully increase performance. And we see that it has some impact, but not the type that you'd expect. And that it meaningfully decreases speed, which justifies, in my mind, just having this FIFO queue of memories.[00:14:20] Isaac Robinson: Although in the future, I'm super interested to see A more dedicated summarization of all of the last video, not just a stacking of the last frames. So that another extension of beautiful per frame work into the video domain.[00:14:42] Realtime detection: DETRs > YOLO[00:14:42] Isaac Robinson: The next trend I'm interested in talking about is this interesting at RoboFlow, we're super interested in training real time object detectors.[00:14:50] Isaac Robinson: Those are bread and butter. And so we're doing a lot to keep track of what is actually happening in that space. We are finally starting to see something change. So, [00:15:00] for years, YOLOs have been the dominant way of doing real time object detection, and we can see here that they've essentially stagnated.[00:15:08] Isaac Robinson: The performance between 10 and 11 is not meaningfully different, at least, you know, in this type of high level chart. And even from the last couple series, there's not. A major change so YOLOs have hit a plateau, debtors have not. So we can look here and see the YOLO series has this plateau. 
And then these RT debtor, LW debtor, and Define have meaningfully changed that plateau so that in fact, the best Define models are plus 4.[00:15:43] Isaac Robinson: 6 AP on Cocoa at the same latency. So three major steps to accomplish this. The first RT deditor, which is technically a 2023 paper preprint, but published officially in 24, so I'm going to include that. I hope that's okay. [00:16:00] That is showed that RT deditor showed that we could actually match or out speed YOLOs.[00:16:04] Isaac Robinson: And then LWdebtor showed that pre training is hugely effective on debtors and much less so on YOLOs. And then DeFine added the types of bells and whistles that we expect from these types, this, this arena. So the major improvements that RTdebtor shows was Taking the multi scale features that debtors typically pass into their encoder and decoupling them into a much more efficient transformer encoder.[00:16:30] Isaac Robinson: The transformer is of course, quadratic complexity. So decreasing the amount of stuff that you pass in at once is super helpful for increasing your runtime or increasing your throughput. So that change basically brought us up to yellow speed and then they do a hardcore analysis on. Benchmarking YOLOs, including the NMS step.[00:16:54] Isaac Robinson: Once you once you include the NMS in the latency calculation, you see that in fact, these debtors [00:17:00] are outperforming, at least this time, the the, the YOLOs that existed. Then LW debtor goes in and suggests that in fact, the frame, the huge boost here is from pre training. So, this is the define line, and this is the define line without pre training.[00:17:19] Isaac Robinson: It's within range, it's still an improvement over the YOLOs, but Really huge boost comes from the benefit of pre training. When YOLOx came out in 2021, they showed that they got much better results by having a much, much longer training time, but they found that when they did that, they actually did not benefit from pre training.[00:17:40] Isaac Robinson: So, you see in this graph from LWdebtor, in fact, YOLOs do have a real benefit from pre training, but it goes away as we increase the training time. Then, the debtors converge much faster. LWdebtor trains for only 50 epochs, RTdebtor is 60 epochs. So, one could assume that, in fact, [00:18:00] the entire extra gain from pre training is that you're not destroying your original weights.[00:18:06] Isaac Robinson: By relying on this long training cycle. And then LWdebtor also shows superior performance to our favorite data set, Roboflow 100 which means that they do better on the real world, not just on Cocoa. Then Define throws all the bells and whistles at it. Yellow models tend to have a lot of very specific complicated loss functions.[00:18:26] Isaac Robinson: This Define brings that into the debtor world and shows consistent improvement on a variety of debtor based frameworks. So bring these all together and we see that suddenly we have almost 60 AP on Cocoa while running in like 10 milliseconds. Huge, huge stuff. So we're spending a lot of time trying to build models that work better with less data and debtors are clearly becoming a promising step in that direction.[00:18:56] Isaac Robinson: The, what we're interested in seeing [00:19:00] from the debtors in this, this trend to next is. Codetter and the models that are currently sitting on the top of the leaderboard for large scale inference scale really well as you switch out the backbone. 
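[Editor's note] The benchmarking caveat above — that YOLO-style detectors only look faster if non-maximum suppression is left out of the timing — is easy to reproduce. Here is a rough sketch, assuming a PyTorch model whose raw output is a tensor of boxes and scores (`raw_detector` is a placeholder, not RT-DETR's or any YOLO repo's actual API):

```python
import time
import torch
from torchvision.ops import nms

def end_to_end_latency(raw_detector, image, iou_thresh=0.65, score_thresh=0.25, runs=100):
    """Time the detector *including* the NMS post-processing step.

    DETR-style models emit a fixed set of non-overlapping predictions and skip NMS,
    so comparing them against a YOLO timed without NMS understates the YOLO's latency.
    """
    with torch.no_grad():
        for _ in range(10):            # warm-up so lazy initialization doesn't skew timing
            raw_detector(image)
        if image.is_cuda:
            torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(runs):
            boxes, scores = raw_detector(image)      # raw, pre-NMS predictions
            keep = scores > score_thresh
            boxes, scores = boxes[keep], scores[keep]
            kept = nms(boxes, scores, iou_thresh)    # the step often omitted from benchmarks
            _ = boxes[kept], scores[kept]
        if image.is_cuda:
            torch.cuda.synchronize()
    return (time.perf_counter() - start) / runs * 1000.0   # mean latency in milliseconds
```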
We're very interested in seeing and having people publish a paper, potentially us, on what happens if you take these real time ones and then throw a Swingy at it.[00:19:23] Isaac Robinson: Like, do we have a Pareto curve that extends from the real time domain all the way up to the super, super slow but high performance domain? We also want to see people benchmarking in RF100 more, because that type of data is what's relevant for most users. And we want to see more pre training, because pre training works now.[00:19:43] Isaac Robinson: It's super cool.[00:19:48] Peter's Picks[00:19:48] Peter Robicheaux: Alright, so, yeah, so in that theme one of the big things that we're focusing on is how do we get more out of our pre trained models. And one of the lenses to look at this is through sort of [00:20:00] this, this new requirement for like, how Fine grained visual details and your representations that are extracted from your foundation model.[00:20:08] Peter Robicheaux: So it's sort of a hook for this Oh, yeah, this is just a list of all the the papers that I'm going to mention I just want to make sure I set an actual paper so you can find it later[00:20:18] MMVP (Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs)[00:20:18] Peter Robicheaux: Yeah, so sort of the big hook here is that I make the claim that LLMs can't see if you go to if you go to Claude or ChatGPT you ask it to see this Watch and tell me what time it is, it fails, right?[00:20:34] Peter Robicheaux: And so you could say, like, maybe, maybe the Like, this is, like, a very classic test of an LLM, but you could say, Okay, maybe this, this image is, like, too zoomed out, And it just, like, it'll do better if we increase the resolution, And it has easier time finding these fine grained features, Like, where the watch hands are pointing.[00:20:53] Peter Robicheaux: Nodice. And you can say, okay, well, maybe the model just doesn't know how to tell time from knowing the position of the hands. But if you actually prompt [00:21:00] it textually, it's very easy for it to tell the time. So this to me is proof that these LLMs literally cannot see the position of the watch hands and it can't see those details.[00:21:08] Peter Robicheaux: So the question is sort of why? And for you anthropic heads out there, cloud fails too. So the, the, my first pick for best paper of 2024 Envision is this MMVP paper, which tries to investigate the Why do LLMs not have the ability to see fine grained details? And so, for instance, it comes up with a lot of images like this, where you ask it a question that seems very visually apparent to us, like, which way is the school bus facing?[00:21:32] Peter Robicheaux: And it gets it wrong, and then, of course, it makes up details to support its wrong claim. And so, the process by which it finds these images is sort of contained in its hypothesis for why it can't. See these details. So it hypothesizes that models that have been initialized with, with Clip as their vision encoder, they don't have fine grained details and the, the features extracted using Clip because Clip sort of doesn't need to find these fine grained [00:22:00] details to do its job correctly, which is just to match captions and images, right?[00:22:04] Peter Robicheaux: And sort of at a high level, even if ChatGPT wasn't initialized with Clip and wasn't trained contrastively at all. The vision encoder wasn't trained contrastively at all. 
Still, in order to do its job of capturing the image it could do a pretty good job without actually finding the exact position of all the objects and visual features in the image, right?[00:22:21] Peter Robicheaux: So This paper finds a set of difficult images for these types of models. And the way it does it is it looks for embeddings that are similar in clip space, but far in DynaV2 space. So DynaV2 is a foundation model that was trained self supervised purely on image data. And it kind of uses like some complex student teacher framework, but essentially, and like, it patches out like certain areas of the image or like crops with certain areas of the image and tries to make sure that those have consistent representations, which is a way for it to learn very fine grained visual features.[00:22:54] Peter Robicheaux: And so if you take things that are very close in clip space and very far in DynaV2 space, you get a set of images [00:23:00] that Basically, pairs of images that are hard for a chat GPT and other big language models to distinguish. So, if you then ask it questions about this image, well, as you can see from this chart, it's going to answer the same way for both images, right?[00:23:14] Peter Robicheaux: Because to, to, from the perspective of the vision encoder, they're the same image. And so if you ask a question like, how many eyes does this animal have? It answers the same for both. And like all these other models, including Lava do the same thing, right? And so this is the benchmark that they create, which is like finding clip, like clip line pairs, which is pairs of images that are similar in clip space and creating a data set of multiple choice questions based off of those.[00:23:39] Peter Robicheaux: And so how do these models do? Well, really bad. Lava, I think, So, so, chat2BT and Jim and I do a little bit better than random guessing, but, like, half of the performance of humans who find these problems to be very easy. Lava is, interestingly, extremely negatively correlated with this dataset. It does much, much, much, much worse [00:24:00] than random guessing, which means that this process has done a very good job of identifying hard images for, for Lava, specifically.[00:24:07] Peter Robicheaux: And that's because Lava is basically not trained for very long and is initialized from Clip, and so You would expect it to do poorly on this dataset. So, one of the proposed solutions that this paper attempts is by basically saying, Okay, well if clip features aren't enough, What if we train the visual encoder of the language model also on dyno features?[00:24:27] Peter Robicheaux: And so it, it proposes two different ways of doing this. One, additively which is basically interpolating between the two features, and then one is interleaving, which is just kind of like training one on the combination of both features. So there's this really interesting trend when you do the additive mixture of features.[00:24:45] Peter Robicheaux: So zero is all clip features and one is all DynaV2 features. So. It, as you, so I think it's helpful to look at the right most chart first, which is as you increase the number of DynaV2 features, your model does worse and worse and [00:25:00] worse on the actual language modeling task. And that's because DynaV2 features were trained completely from a self supervised manner and completely in image space.[00:25:08] Peter Robicheaux: It knows nothing about text. These features aren't really compatible with these text models. 
And so you can train an adapter all you want, but it seems that it's in such an alien language that it's like a very hard optimization for this. These models to solve. And so that kind of supports what's happening on the left, which is that, yeah, it gets better at answering these questions if as you include more dyna V two features up to a point, but then you, when you oversaturate, it completely loses its ability to like.[00:25:36] Peter Robicheaux: Answer language and do language tasks. So you can also see with the interleaving, like they essentially double the number of tokens that are going into these models and just train on both, and it still doesn't really solve the MMVP task. It gets Lava 1. 5 above random guessing by a little bit, but it's still not close to ChachiPT or, you know, Any like human performance, obviously.[00:25:59] Peter Robicheaux: [00:26:00] So clearly this proposed solution of just using DynaV2 features directly, isn't going to work. And basically what that means is that as a as a vision foundation model, DynaV2 is going to be insufficient for language tasks, right?[00:26:14] Florence 2 (Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks)[00:26:14] Peter Robicheaux: So my next pick for best paper of 2024 would be Florence 2, which tries to solve this problem by incorporating not only This dimension of spatial hierarchy, which is to say pixel level understanding, but also in making sure to include what they call semantic granularity, which ends up, the goal is basically to have features that are sufficient for finding objects in the image, so they're, they're, they have enough pixel information, but also can be talked about and can be reasoned about.[00:26:44] Peter Robicheaux: And that's on the semantic granularity axis. So here's an example of basically three different paradigms of labeling that they do. So they, they create a big dataset. One is text, which is just captioning. And you would expect a model that's trained [00:27:00] only on captioning to have similar performance like chat2BT and like not have spatial hierarchy, not have features that are meaningful at the pixel level.[00:27:08] Peter Robicheaux: And so they add another type, which is region text pairs, which is essentially either classifying a region or You're doing object detection or doing instance segmentation on that region or captioning that region. And then they have text phrased region annotations, which is essentially a triple. And basically, not only do you have a region that you've described, you also find it's like, It's placed in a descriptive paragraph about the image, which is basically trying to introduce even more like semantic understanding of these regions.[00:27:39] Peter Robicheaux: And so like, for instance, if you're saying a woman riding on the road, right, you have to know what a woman is and what the road is and that she's on top of it. And that's, that's basically composing a bunch of objects in this visual space, but also thinking about it semantically, right? And so the way that they do this is they take basically they just dump Features from a vision encoder [00:28:00] straight into a encoder decoder transformer.[00:28:03] Peter Robicheaux: And then they train a bunch of different tasks like object detection and so on as a language task. And I think that's one of the big things that we saw in 2024 is these, these vision language models operating in, on pixel space linguistically. 
So they introduced a bunch of new tokens to point to locations and[00:28:22] Peter Robicheaux: So how does it work? How does it actually do? We can see if you look at the graph on the right, which is using the, the Dino, the the Dino framework your, your pre trained Florence 2 models transfer very, very well. They get 60%, 60 percent map on Cocoa, which is like approaching state of the art and they train[00:28:42] Vik Korrapati: with, and they[00:28:43] Peter Robicheaux: train with a much more more efficiently.[00:28:47] Peter Robicheaux: So they, they converge a lot faster, which both of these things are pointing to the fact that they're actually leveraging their pre trained weights effectively. So where is it falling short? So these models, I forgot to mention, Florence is a 0. 2 [00:29:00] billion and a 0. 7 billion parameter count. So they're very, very small in terms of being a language model.[00:29:05] Peter Robicheaux: And I think that. This framework, you can see saturation. So, what this graph is showing is that if you train a Florence 2 model purely on the image level and region level annotations and not including the pixel level annotations, like this, segmentation, it actually performs better as an object detector.[00:29:25] Peter Robicheaux: And what that means is that it's not able to actually learn all the visual tasks that it's trying to learn because it doesn't have enough capacity.[00:29:32] PalíGemma / PaliGemma 2[00:29:32] Peter Robicheaux: So I'd like to see this paper explore larger model sizes, which brings us to our next big paper of 2024 or two papers. So PolyGemma came out earlier this year.[00:29:42] Peter Robicheaux: PolyGemma 2 was released, I think like a week or two ago. Oh, I forgot to mention, you can actually train You can, like, label text datasets on RoboFlow and you can train a Florence 2 model and you can actually train a PolyGemma 2 model on RoboFlow, which we got into the platform within, like, 14 hours of release, which I was really excited about.[00:29:59] Peter Robicheaux: So, anyway, so [00:30:00] PolyGemma 2, so PolyGemma is essentially doing the same thing, but instead of doing an encoder decoder, it just dumps everything into a decoder only transformer model. But it also introduced the concept of location tokens to point to objects in pixel space. PolyGemma 2, so PolyGemma uses Gemma as the language encoder, and it uses Gemma2B.[00:30:17] Peter Robicheaux: PolyGemma 2 introduces using multiple different sizes of language encoders. So, the way that they sort of get around having to do encoder decoder is they use the concept of prefix loss. Which basically means that when it's generating, tokens autoregressively, it's all those tokens in the prefix, which is like the image that it's looking at and like a description of the task that it's trying to do.[00:30:41] Peter Robicheaux: They're attending to each other fully, full attention. Which means that, you know, it can sort of. Find high level it's easier for the, the prefix to color, to color the output of the suffix and also to just find like features easily. 
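[Editor's note] The "prefix loss" idea described above — full bidirectional attention over the image-plus-instruction prefix, causal attention only over the generated suffix — comes down to one attention mask. A small illustrative sketch in NumPy (not the actual PaliGemma implementation):

```python
import numpy as np

def prefix_lm_mask(prefix_len, total_len):
    """Boolean attention mask: mask[i, j] is True when position i may attend to position j.

    Prefix tokens (image patches + task text) see each other bidirectionally;
    suffix tokens are generated autoregressively, so each attends to the full
    prefix and to earlier suffix tokens, but never to future suffix tokens.
    """
    mask = np.zeros((total_len, total_len), dtype=bool)
    mask[:, :prefix_len] = True                 # every position sees the full prefix
    for i in range(prefix_len, total_len):
        mask[i, prefix_len:i + 1] = True        # suffix: causal over generated tokens
    return mask

# e.g. 4 prefix tokens followed by 3 generated tokens:
print(prefix_lm_mask(4, 7).astype(int))
```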
So this is sort of an example of one of the tasks it was trained on: you describe the task in English, you ask it to segment these two classes of objects, and then it finds their locations using these location tokens, and it finds their masks using some encoding of the masks into tokens. [00:31:24] Peter Robicheaux: And, yeah, one of my critiques, I guess, of PaliGemma 1 at least, is that performance saturates as a pre-trained model after only 300 million examples seen. What this graph is representing is each blue dot is performance on some downstream task, and you can see that after seeing 300 million examples, it does about as well on all of the downstream tasks they tried, which was a lot, as it does at 1 billion examples, which to me also suggests a lack of capacity for this model. PaliGemma 2, you can see the results on object detection. These were transferred to COCO, and you can see that this also points to an increase in capacity being helpful to the model: as both the resolution increases and the parameter count of the language model increases, performance increases. [00:32:16] Peter Robicheaux: Resolution makes sense, obviously; it helps to find small objects in the image. But it also makes sense for another reason, which is that it kind of gives the model a thinking register, more tokens to process when making its predictions. But yeah, you could say, oh, 43.6, that's not that great, Florence-2 got 60. But this is not training a DINO or a DETR head on top of this image encoder; it's doing the raw language modeling task on COCO. So it doesn't have any of the bells and whistles, it doesn't have any of the fancy losses, it doesn't even have bipartite graph matching or anything like that. [00:32:52] Peter Robicheaux: Okay, the big result, and one of the reasons I was really excited about this paper, is that they blow everything else away on MMVP. I mean, 47.3, sure, that's nowhere near human accuracy, which, again, is 94%, but for a 2 billion parameter language model to beat ChatGPT, that's quite the achievement. [00:33:12] Peter Robicheaux: And that sort of brings us to our final pick for paper of the year, which is AIMv2. So AIMv2 sort of says, okay, maybe coming up with all these specific annotations to find features with high fidelity in pixel space isn't actually necessary, and we can come up with an even simpler, more beautiful idea for combining image tokens and text tokens in a way that's interfaceable for language tasks. [00:33:44] Peter Robicheaux: And this is nice because it can scale; you can come up with lots more data if you don't have to come up with all these annotations, right? So the way that it works is it does something very, very similar to PaliGemma, where you have a vision encoder that dumps image tokens into a decoder-only transformer. [00:34:00] Peter Robicheaux: But the interesting thing is that it also autoregressively tries to predict the image tokens, trained with a mean squared error loss.
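As a rough sketch of that idea, assuming a decoder that emits one hidden state per image patch: the hidden state at position t is projected back to patch space and regressed against patch t+1 with mean squared error, alongside the usual next-token loss on the caption. The dimensions and the linear projection head here are illustrative assumptions, not AIMv2's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d_model, patch_dim, num_patches = 256, 768, 196            # illustrative sizes
decoder_hidden = torch.randn(1, num_patches, d_model)       # stand-in for decoder outputs over image tokens
target_patches = torch.randn(1, num_patches, patch_dim)     # stand-in for ground-truth patch values

patch_head = nn.Linear(d_model, patch_dim)                  # regression head back to patch space

pred = patch_head(decoder_hidden[:, :-1, :])                 # predict patch t+1 from the state at t
image_loss = F.mse_loss(pred, target_patches[:, 1:, :])      # the MSE reconstruction term
print(float(image_loss))
```

In training, this term would simply be added to the caption's cross-entropy loss, which is what lets the pre-training scale on raw image-text pairs without detection or segmentation labels.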
So instead of having to come up with fancy object detection or segmentation labels, you can just try to reconstruct the image and have it learn fine-grained features that way. [00:34:16] Peter Robicheaux: And it does this in, I think, a beautiful way that's compatible with the PaliGemma line of thinking, which is randomly sampling a prefix length and using only that number of image tokens as the prefix, and doing a similar thing with the causal mask. So the causal-with-prefix mask is the attention mask on the right: full block attention over some randomly sampled number of image tokens, which are then used to reconstruct the rest of the image and the downstream caption for that image. [00:34:35] Peter Robicheaux: And this is the dataset they train on. It's internet-scale data, very high quality, created essentially by the Data Filtering Networks paper, which is maybe the best CLIP data that exists. [00:35:00] Peter Robicheaux: And we can see that this is finally a model that doesn't saturate. Even at the highest parameter count, it appears to keep improving in performance with more and more samples seen. So you can sort of think that if we just keep bumping the parameter count and increasing the examples seen, which is the line of thinking for language models, then it'll keep getting better. [00:35:27] Peter Robicheaux: It also improves with resolution, which you would expect; this is ImageNet classification accuracy, and yeah, it does better if you increase the resolution, which means that it's actually leveraging and finding fine-grained visual features. [00:35:44] Peter Robicheaux: And so how does it actually do compared to CLIP on COCO? Well, you can see that if you slap a transformer detection head on it and transfer to COCO, it gets 60.2, which is also within spitting distance of SOTA, which means that it does a very good job of finding visual features. But you could say, okay, wait a second, CLIP got 59.1, so how does this prove your claim at all? Doesn't that mean CLIP, which is known to be "CLIP-blind" and do badly on MMVP, is able to achieve very high performance on this fine-grained visual-features task of object detection? Well, they train on tons of data; they train on Objects365, COCO, Flickr, and everything else. And so I think this benchmark doesn't do a great job of selling how good of a pre-trained model AIMv2 is, and we would like to see the performance on fewer examples and not trained to convergence on object detection. So seeing it in the real world on a dataset like Roboflow 100, I think, would be quite interesting. [00:36:48] Peter Robicheaux: And our final, final pick for paper of 2024 would be Moondream. So, introducing Vik to talk about that. [00:36:54] swyx: But overall, that was exactly what I was looking for, a best of 2024, an amazing job. Yeah, if there are any other questions while Vik gets set up, vision stuff, [00:37:11] swyx: yeah, go ahead. [00:37:13] Vik Korrapati / Moondream [00:37:13] question: Hi, over here, while we're getting set up. Thanks for the really awesome talk.
One of the things that's been weird and surprising is that the foundation model companies, even these multimodal LLMs, they're still just worse than RT-DETR at detection. Like, if you wanted to pay a bunch of money to auto-label your detection dataset and you gave it to OpenAI or Claude, that would be a big waste. [00:37:37] question: Even PaliGemma 2 is worse. So I'm curious to hear your thoughts on how come nobody's cracked the code on a generalist that really beats a specialist model in computer vision, like they have in LLM land. [00:38:01] Isaac Robinson: Okay, it's a very, very interesting question. I think it depends on the specific domain. For image classification, it's basically there; AIMv2 showed a simple attentional probe on the pre-trained features gets like 90%, which is as well as anyone does. The bigger question is why isn't it transferring to object detection, especially real-time object detection. [00:38:25] Isaac Robinson: In my mind, there are two answers. One is that object detection architectures are really, really domain specific. We see all these super complicated things, and it's not easy to build something that just transfers naturally like that, whereas for image classification, CLIP pre-training transfers super, super quickly. [00:38:48] Isaac Robinson: And the other thing is, until recently, the real-time object detectors didn't even really benefit from pre-training. You see the YOLOs that are essentially saturated, showing very little difference with pre-training improvements, or with using a pre-trained model at all. So it's not surprising, necessarily, that people aren't looking at the effects of better and better pre-training on real-time detection. Maybe that'll change in the next year. Does that answer your question? [00:39:17] Peter Robicheaux: Can you guys hear me? Yeah, one thing I want to add, just to summarize, is that until 2024 we hadn't really seen a combination of transformer-based object detectors and fancy losses, and PaliGemma suffers from the same problem. Which is basically to say that these ResNet, or convolutional, models have all these extreme optimizations for doing object detection, but I think it's kind of been shown now that convolutional models just don't benefit from pre-training and just don't have the level of intelligence of transformer models. [00:39:56] swyx: Awesome. [00:39:59] Vik Korrapati: Hi, can you hear me? [00:40:01] swyx: Cool, I hear you, see you. Are you sharing your screen? [00:40:04] Vik Korrapati: Might have forgotten to do that. Let me do that. [00:40:17] swyx: Here's your screen. Oh, classic. You might have to quit Zoom and restart. It's fine, we have a capture of your screen. [00:40:34] swyx: So let's get to it. [00:40:35] Vik Korrapati: Okay, easy enough. [00:40:49] Vik Korrapati: All right. Hi, everyone. My name is Vik. I've been working on Moondream for almost a year now. Like Shawn mentioned, I just went and looked, and it turns out the first version I released was December 29, 2023. It's been a fascinating journey. So Moondream started off as a tiny vision language model.
Since then, we've expanded scope a little bit to also try and build some tooling, client libraries, et cetera, to help people really deploy it. [00:41:13] Vik Korrapati: Unlike traditional large models that are focused on assistant-type use cases, we're laser focused on building capabilities that developers can use to build vision applications that can run anywhere. In a lot of cases for vision, more so than for text, you really care about being able to run on the edge, run in real time, et cetera. [00:41:40] Vik Korrapati: So that's really important. We have different output modalities that we support. There's query, where you can ask general English questions about an image and get back human-like answers. There's captioning, which a lot of our users use for generating synthetic datasets to then train diffusion models and whatnot; we've done a lot of work to minimize errors there, so that gets used a lot. We have open-vocabulary object detection built in, similar to a couple of more recent models like PaliGemma, where rather than having to train a dedicated model, you can just say "show me soccer balls in this image" or "show me if there are any deer in this image," and it'll detect it. [00:42:14] Vik Korrapati: More recently, earlier this month, we released a pointing capability, where if all you're interested in is the center of an object, you can just ask it to point out where that is. This is very useful when you're doing UI automation type stuff. Let's see, we have two models out right now. There's a general-purpose 2B parameter model, which runs fine if you're running on a server, is good for local desktop use, and can run on flagship mobile phones. And there's a smaller 0.5B model that uses less memory, even with our not yet fully optimized inference client. [00:43:06] Vik Korrapati: The way we built our 0.5B model was to start with the 2 billion parameter model and prune it while doing continual training to retain performance. Our objective during the pruning was to preserve accuracy across a broad set of benchmarks. So the way we went about it was to estimate the importance of different components of the model, like attention heads, channels, MLP rows and whatnot, using basically a technique based on the gradient. [00:43:37] Vik Korrapati: I'm not sure how much people want to know the details; we'll be writing a paper about this, but feel free to grab me if you have more questions. Then we iteratively prune a small chunk that will minimize the loss in performance, retrain the model to recover performance, and bring it back. The 0.5B we released is more of a proof of concept that this is possible. The thing that's really exciting about this is it makes it possible for developers to build using the 2B parameter model and just explore, build their application, and then, once they're ready to deploy, figure out what exactly they need out of the model and prune those capabilities into a smaller form factor that makes sense for their deployment target. [00:44:12] Vik Korrapati: So yeah, very excited about that. Let me talk to you folks a little bit about another problem I've been working on recently, which is similar to the clocks example we've been talking about.
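The exact pruning recipe isn't published yet, but a generic first-order version of "importance based on the gradient" might look like the sketch below: score each hidden unit by the accumulated |weight × gradient| on a calibration batch and mark the lowest-scoring units for removal. Everything here (the toy model, the saliency formula, the 25% prune fraction) is an assumption for illustration, not Moondream's actual procedure; the gauge-reading example continues right after this sketch.

```python
import torch
import torch.nn as nn

# Toy stand-in for one block of a larger model.
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
x, y = torch.randn(32, 16), torch.randint(0, 4, (32,))

loss = nn.functional.cross_entropy(model(x), y)  # calibration forward/backward pass
loss.backward()

first_layer = model[0]
# First-order saliency per hidden unit: |w * dL/dw| summed over each row.
importance = (first_layer.weight * first_layer.weight.grad).abs().sum(dim=1)
prune_idx = importance.argsort()[: int(0.25 * importance.numel())]
print("lowest-importance units to prune:", sorted(prune_idx.tolist()))
```

In an iterative prune-and-retrain loop, a small fraction of units would be removed per step and the model briefly retrained before the next round of scoring.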
We had a customer reach out who had a bunch of gauges out in the field. This is very common in manufacturing and oil and gas, where you have a bunch of analog devices that you need to monitor. [00:44:34] Vik Korrapati: It's expensive to have humans look at them and monitor stuff and make sure that the system gets shut down when the temperature goes over 80 or something. So I was like, yeah, this seems easy enough, happy to help you distill that, let's get it going. Turns out our model couldn't do it at all. [00:44:51] Vik Korrapati: I went and looked at other open source models to see if I could just generate a bunch of data and learn from that. Did not work either. So I was like, let's look at what the folks with hundreds of billions of dollars in market cap have to offer. And yeah, that doesn't work either. My hypothesis is that the way these models are trained is on a large amount of image-text data scraped from the internet, and that can be biased. In the case of gauges, most gauge images aren't gauges in the wild, they're product images, detail images like these, where the needle is always set to zero. It's paired with an alt text that says something like "GIVTO pressure sensor, PSI, zero to 30" or something. And so the models are fairly good at picking up those details: it'll tell you that it's a pressure gauge, it'll tell you what the brand is, but it doesn't really learn to pay attention to the needle over there. So, yeah, that's a gap we need to address. Naturally my mind goes to, let's use synthetic data to solve this problem. That works, but it's problematic, because it turned out we needed millions of synthetic gauge images to get to reasonable performance. [00:45:57] Vik Korrapati: And thinking about it, reading a gauge is not a single-shot process in our minds, right? If you had to tell me the reading in Celsius for this real-world gauge, there are two dials on there. So first you have to figure out which one you have to pay attention to, the inner one or the outer one. You look at the tip of the needle, you look at what labels it's between, you count how many ticks there are, and you do some math to figure out what the reading probably is. So what happens if we just add that as a chain of thought, to allow the model to better learn the subtasks it needs to perform to accomplish this goal? [00:46:37] Vik Korrapati: You can see in this example, which was actually generated by the latest version of our model: okay, Celsius is the inner scale, it's between 50 and 60, there are 10 ticks, so the second tick. It's a little debatable here; there's a weird shadow situation going on and the dial is off, so I don't know what the ground truth is, but it works okay. [00:46:57] Vik Korrapati: The points on there are actually grounded. I don't know if this is easy to see, but when I click on those, there's a little red dot that moves around on the image. The model actually has to predict where these points are. I was initially trying to do this with bounding boxes, but then Molmo came out with pointing capabilities, and pointing is a much better paradigm to represent this. We see pretty good results. This one's actually for clock reading.
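To illustrate what a grounded chain-of-thought training target for gauge or clock reading might look like, here is a hypothetical sketch: the field names, the normalized point coordinates, and the wording of the steps are all invented for illustration; the real Moondream training format is not public.

```python
# Hypothetical grounded chain-of-thought target for a gauge-reading example.
gauge_cot_target = {
    "question": "What is the temperature reading in Celsius?",
    "chain_of_thought": [
        {"step": "Celsius is the inner scale.", "point": (0.52, 0.61)},
        {"step": "The needle tip sits between the 50 and 60 labels.", "point": (0.47, 0.35)},
        {"step": "There are 10 ticks between labels, so each tick is 1 degree."},
        {"step": "The needle rests on roughly the 4th tick past 50."},
    ],
    "answer": "54",
}

# At training time a structure like this would be serialized into the model's
# output text (with the points mapped to location tokens), so the model is
# supervised on the intermediate perception steps, not just the final number.
for item in gauge_cot_target["chain_of_thought"]:
    print("-", item["step"])
```

Keeping the intermediate steps grounded to points is also what makes failures inspectable, as the next part of the talk describes.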
I couldn't find our chart for gauge reading at the last minute, so the light blue chart is with our grounded chain of thought. We built a clock-reading benchmark of about 500 images, [00:47:37] Vik Korrapati: and this measures accuracy on that. You can see it's a lot more sample efficient when you're using the chain of thought to train the model. Another big benefit of this approach is that you can kind of understand how the model is doing it and how it's failing. So in this example, the actual correct reading is 54 Celsius and the model output 56, not too bad, but you can actually go and see where it messed up. It got a lot of these right, except instead of saying it was on the 7th tick, it predicted that it was the 8th tick, and that's why it went with 56. [00:48:14] Vik Korrapati: So now that you know it's failing in this way, you can adjust how you're doing the chain of thought, to maybe say, actually count out each tick from 40, instead of just trying to say it's the eighth tick. Or you might say, okay, I see that there's that middle marker, I'll count from there instead of all the way from 40. So it helps a ton. The other thing I'm excited about is few-shot prompting or test-time training with this. If a customer has a specific gauge that we're seeing minor errors on, they can give us a couple of examples where, if it's misdetecting the needle, they can go in and correct that in the chain of thought, and hopefully that works the next time. Now, it's an exciting approach, but we've only applied it to clocks and gauges. The real question is, is it going to generalize? Probably; there's some evidence from text models that when you train on a broad number of tasks, it does generalize, and I'm seeing some signs of that with our model as well. [00:49:05] Vik Korrapati: So, in addition to the image-based chain-of-thought stuff, I also added some spelling-based chain of thought to help it better understand OCR, I guess. I don't understand why everyone doesn't do this, by the way; it's a trivial benchmark question, very easy to nail. But I also wanted to support it for stuff like license plate partial matching, like, hey, does any license plate in this image start with WHA or whatever? [00:49:29] Vik Korrapati: So yeah, that sort of worked. All right, that ends my story about the gauges. If you think about what's going on here, it's interesting that LLMs are showing enormous progress in reasoning, especially with the latest set of models that we've seen, but I have a feeling that VLMs are lagging behind, as we can see with these tasks that should be very simple for a human to do and that are very easy to find VLMs failing at. [00:50:04] Vik Korrapati: My hypothesis on why this is the case is that on the internet there's a ton of data that talks about how to reason. There are books about how to solve problems, there are books critiquing the books about how to solve problems. But humans are just so good at perception that we never really talk about it. Maybe in art books, where it's like, hey, to show that that mountain is further away, you need to desaturate it a bit or whatever. But the actual data on how to look at images isn't really present. Also, the data we have is kind of sketchy.
The best source of data we have is image-alt-text pairs on the internet, and that's pretty low quality. [00:50:40] Vik Korrapati: So yeah, I think our solution here is really just that we need to teach them how to operate on individual tasks and figure out how to scale that out. All right, so, conclusion. At Moondream we're trying to build amazing VLMs that run everywhere. Very hard problem, much work ahead, but we're making a ton of progress and I'm really excited about it. If anyone wants to chat about more technical details about how we're doing this, or is interested in collaborating, please, please hit me up. [00:51:09] swyx: When people say multimodality, I always think about vision as the first among equals in all the modalities. So I really appreciate having the experts in the room. Get full access to Latent Space at www.latent.space/subscribe
Fortellingen "Alt etterpå" er skrevet av Vivian Zahl Olsen. Den er hentet fra "I de dager" - et julehefte utgitt av NMS og Verbum https://bibel.no/nettbutikk/i-de-dager-3
*Support the Dobré ráno podcast in the Toldo app at sme.sk/extradobrerano. – Progresívne Slovensko in the lead, followed at a growing distance by Smer. But also Hlas stumbling around ten percent, and gains for the extremists and for Igor Matovič's movement. Last week's NMS poll model showed several interesting things. So who, with whom, when, and under what conditions? In the Dobré ráno podcast, Tomáš Prokopčák puts the questions to analyst Mikuláš Hanes of NMS. Sound sources: TASR, YouTube/Republika, Matúš Šutaj Eštok, Facebook/Hnutie Slovensko, Andrej Danko – You can find all SME daily podcasts at sme.sk/podcasty – Also subscribe to the audio version of the SME.sk daily newsletter with the most important news at sme.sk/brifing
Slovakia Today, English Language Current Affairs Programme from Slovak Radio
In this Monday show, Patka is going to walk you through two topics. A recent survey by research company NMS showed that over a quarter of Slovaks have faced sexual harassment in public transport. In the second part of the show, we're going to talk about how tourism in Slovakia can be more ecological and sustainable.
Welcome to Episode 17 of Queer Storytime! In this deeply moving and enlightening episode, we sit down with the incredible Jemarc Axinto (They/Them) a trauma-informed recovery coach, award-winning consultant, and devoted advocate for queer and trans healing. Together, we explore the profound journeys of self-discovery, resilience, and the power of community in the gender and sexually expansive world. Here's what you can expect:Highlights from the Episode:1. Jemarc's Journey to Self-DiscoveryJemarc reflects on what they would tell their younger self about identity: “You're gay. It's okay. You don't have to be perfect. You're allowed to make mistakes and have difficult emotions.”Their touching realization of how societal and personal pressures shaped their journey and the lessons they've embraced about self-compassion and non-attachment.2. Queer Gatekeeping and Harm Within the CommunityJemarc addresses a critical issue: the harm caused by queer gatekeeping. They share a vulnerable experience of being turned away by their own community, emphasizing the need for inclusivity and healing from internalized heteronormativity.3. Healing Through a Decolonial LensTo Filipino queer and trans youth, Jemarc offers powerful advice rooted in decolonization: “Queerness is innate. Shame tied to colonization is not your responsibility to bear.” They beautifully highlight indigenous traditions that celebrated gender and sexual diversity.4. What Lawmakers Need to HearIn a heartfelt call to global lawmakers targeting the queer and trans community, Jemarc says: “Go to therapy. Heal your trauma. Address why you feel unsafe in yourself and your position. The harm you cause others reflects your unhealed wounds.”5. Advice for Queer and Trans YouthJemarc reminds queer and trans youth, as well as adults, that self-identity is a journey: “It's okay to change your mind. Gender and sexuality are fluid. Be gentle with yourself.”6. Grounding in the BodyOn connection and presence, Jemarc discusses their struggles with technology addiction and the power of grounding through the breath: “You can't breathe in the past or future. When I'm breathing, I'm here now.”Jemarc's Current Projects:Consultation and Coaching: Working toward a sustainable balance to make their services more accessible.Upcoming Book: I Am My Own Safe Space: Transforming the World Through Trauma Healing (self-publishing soon!).YouTube Channel: Teaching wellness through the lens of pop culture. Episodes like “What Star Wars Teaches About Accepting Death” are in the works. Sign up for updates at: jmarkxsanto.com/spiritual-geek-and-newsletter.Additional Resources:Jemarc's Website: https://www.jemarcaxinto.com/Follow Jemarc on Instagram: @JemarcaxintoConnect on LinkedIn: Jemarc AxintoConnect with Your Host Stevie: QueerStorytimeThePodcast@gmail.com Join the QST Community Facebook Group: Come connect with our vibrant community here, it's free to join! Facebook Group: https://www.facebook.com/share/JCiyGgCnpX7gPbfU/?mibextid=K35XfQueer Story Time Email List: Stay updated with QST episodes, and special news, events, and future opportunities Email List Sign-Up: http://eepurl.com/iSc-HQLeave A Star Rating, Written Review, & Follow QST Podcast: I encourage QST listeners to leave a star rating, and a written review on the podcast platform of your choice and to share the podcast with friends and family! 
This helps QST expand to an even bigger audience globally. Be sure to follow your host Stevie on Instagram @queertransthriving and the QST YouTube Channel: https://www.youtube.com/channel/UCsV_UVohIXCZkSXExp8aYkA Support QST & Buy Me A Coffee: If you'd like to support Stevie as your QST host, please consider buying me a coffee at this link and check out my additional offerings: https://buymeacoffee.com/queertransthriving Get In-Touch with Stevie via E-Mail: queerstorytimethepodcast@gmail.com Host: Stevie Inghram, M.S., YT, AWC, NMS-4 (they/them or she/her) Support this podcast at — https://redcircle.com/queer-story-time-the-podcast/donations
Carlos Romero believes that mentors hold the key to STEM education. His own story is one of growing into success, built on his parents' emphasis on the importance of education and the legacy of high achievement set by his siblings. Carlos was inspired by teachers to take challenging steps that led him to a distinguished career in industry. Carlos Romero is the Manager of Global Technology at Walgreens Boots Alliance. Carlos serves as a board member of the National Math and Science Initiative (NMSI, nms.org), a non-profit organization dedicated to equipping underserved schools and communities with vital STEM resources, and as an industry mentor at the University of Chicago's Pritzker School of Molecular Engineering. Carlos knows firsthand the impact that STEM leaders can make on young students as mentors and role models. He sees industry skills like problem solving, project management, and communication as vital skills for educators to add to their STEM teaching. Connect with Carlos: The National Math & Science Initiative nms.org Free STEM Lessons from NMSI LinkedIn: linkedin.com/in/cromerob/ Chris Woods is the host of the STEM Everyday Podcast... Connect with him: Website: dailystem.com Twitter/X: @dailystem Instagram: @dailystem YouTube: @dailystem Get Chris's book Daily STEM on Amazon Support the show
Episode SummaryIn this transformative episode of Queer Story Time, I sit down with the incredible Dr. Mojisola Edu, a healer, advocate, and practitioner devoted to creating safe spaces for trauma survivors & marginalized communities. Together, we unpack the intersections of identity, adversity, and healing—exploring how trauma impacts our bodies and the liberatory power of holistic health practices for queer and trans people.Dr. Mojisola shares her personal journey and the lessons she's learned in creating spaces where others can thrive, while I reflect on my own path, from growing up in a conservative religious environment to finding liberation through yoga, Buddhism, and body-centered therapies.We dive deep into the importance of community, self-parenting, and tools for cultivating safety in the body, especially for those of us navigating a world rife with discrimination. This episode is packed with insights, compassion, and actionable advice for anyone seeking healing and connection.What You'll Learn in This EpisodeThe power of listening to and trusting your intuition, especially as a young queer or trans person.How to find and create chosen family when biological family isn't a source of support.Why music, dance, and art are universal languages of connection and healing.The importance of addressing trauma holistically—through both therapy and body-centered practices like yoga, mindfulness, and Ayurveda.The science of the nervous system: understanding the parasympathetic "rest, digest, and heal" response versus the chronic stress of the sympathetic "fight, flight, or freeze" mode.Practical ways to move toward safety, healing, and embodiment in your daily life.How marginalization and systemic discrimination impact queer and trans bodies—and how to begin reclaiming them.About Dr. Mojisola IduDr. Mojisola Edu is a dedicated advocate for marginalized communities, bringing her expertise in holistic health and intersectionality to her work. With a background in public health, social justice, and healing arts, she offers unique insights into the complexities of healing from trauma while navigating identity and systemic oppression.Find her online:Instagram: @LoveEnergyServicesWebsite: www.loveenergyservices.comPhone: (240) 468-2571Resources MentionedYoga therapy and its role in healing trauma.The importance of the parasympathetic nervous system in reducing stress and promoting healing.Ayurvedic practices like Abhyanga massage and hydrotherapy.Eastern philosophies (Buddhism and yoga) as non-dogmatic paths for liberation from suffering.Stay tuned for Episode 17, where we dive into new stories and conversations that inspire, educate, and empower.Support the PodcastIf you loved this episode, please rate and review Queer Story Time on your favorite podcast platform. Your support helps us amplify queer and trans voices and continue creating space for healing and connection.Sending love, hugs, and healing vibes your way!
We continue (but do not finish) our series on optics. From the history of long-haul transmission, through the structure of optical lines and the components of a DWDM network, we have now arrived at planning and construction, today's topic. Who: Boris Chervakov, network engineer at Yandex; Alexander Lobachev, head of the Yandex Network Influence group at Yandex; Pavel Ostapenko, network engineer at OOO "Bulat"; Dimitriy Starykh, deputy head of the research department at T8. What we cover: Planning: do I even need DWDM? Bandwidth, traffic matrix, distance, topology, optical design. Construction: cable laying (which cable?), OTDR measurements, terminal and amplification nodes, Telecom/DCI. Commissioning: laser classes, built-in protection mechanisms (ALS, APR, Back Reflection), launching the first lambda, EDFA tuning (AGC, APC), Raman calibration, power adjustment (ATT, VOA, ROADM), Optical CP, OSA, OCM, NMS, LCT. Upgrade: modernize or build anew. The post telecom №141. Optics. Construction appeared first on linkmeup.
Self-care is more than a buzzword - it's a revolutionary act. In episode 15 of Queer Story Time we discuss how healing ourselves individually not only transforms our own lives but also strengthens the collective fight for change and transformation globally. Whether you're a healer, teacher, or advocate, you'll find practical tools to sustain your activism and nourish your well-being. We explore the deep connection between self-care, activism, and community building for queer and trans equity, liberation, and justice. Inspired by leaders like Audre Lorde, Angela Davis, Bell Hooks, and Grace Lee Boggs.In this episode, we cover:Why self-care is vital for sustaining activism.The wisdom of Audre Lorde, Angela Davis, Bell Hooks, and Grace Lee Boggs.Practices to process emotions, release pain, and connect inwardly, including yoga, meditation, forest bathing, and the Dutch art of "Niksen" (doing nothing).Foundational health habits to feel happier, healthier, and more whole.Takeaway Message:Caring for yourself is integral to showing up fully in the fight for justice. By healing yourself, you contribute to the healing of our entire community and world at large.Work with Me:Feeling stuck in your healing journey? As a yoga & ayurvedic therapist, Buddhist teacher, and soon-to-be naturopathic physician, I offer holistic health coaching to help you thrive.
*Support the Dobré ráno podcast in the Toldo app at sme.sk/extradobrerano. Ivan Korčok is joining PS. And a new election poll model from the NMS agency shows a gradual decline in support for the coalition parties. According to it, Progresívne Slovensko would clearly win an election ahead of Smer, but without Hlas a government probably still could not be formed. So how is the mood in the country changing, what is happening with Smer and Hlas, and what will one large opposition party do to the others? In the Dobré ráno podcast, Tomáš Prokopčák talks with Peter Tkačenko. Recommendation: Since the weekend is ahead of us, today I recommend some rest, specifically the video game Dragon Age Veilguard, which I consider a very pleasantly balanced action RPG. – You can find all SME daily podcasts at sme.sk/podcasty – Also subscribe to the audio version of the SME.sk daily newsletter with the most important news at sme.sk/brifing
In this episode of Queer Story Time, Stevie is joined by podcast hosts Robin and Chris for a heartfelt exploration of coming out as queer later in life. They delve into the transformative power of self-acceptance and discuss the significance of individual connections and compassionate conversations, particularly with those who may struggle to understand queer and trans issues.Robin came out at 54 and has since dedicated her work to supporting others in similar situations. Her podcast, COMING OUT LATE, is among the top 1.5% globally, and her private Facebook group boasts over 5,300 members. Robin also runs three weekly support groups, offers 1-on-1 coaching, facilitates in-person retreats, and is preparing to write a guidebook on coming out late for lesbians.Chris is a 500-hour certified Yoga Teacher and is soon set to become a Certified Yoga Therapist. With a deep commitment to Yoga and recovery, Chris shares their knowledge and tools from Yoga, Ayurveda, and recovery to aid others on their healing journeys.Tune in for a powerful conversation on embracing oneself, building connections, and fostering understanding within our communities.Key Topics Discussed:Personal Growth and Sobriety: Chris and Robin delve into their recovery journeys, focusing on emotional sobriety and the importance of self-reflection, acceptance, and turning over their will to a higher power.Creating Understanding: Chris shares a recent experience helping a woman understand her niece's pronoun change, highlighting the impact of compassionate, one-on-one conversations.Grounding and Centering: The importance of being grounded and centered in oneself to navigate challenging conversations.Coming Out Later in Life: Conversations on how coming out later can be transformative and the importance of supportive spaces and allies. They share their experiences and insights on navigating this process with others who are in similar situations.Grief and Loss: Chris and Robin discuss their personal experiences with grief and loss, including how these experiences have shaped their journey and their work in supporting others.Support for Queer and Trans Youth: Tips for queer and trans youth on finding trusted allies and building support systems.Global Connections: The role of technology in creating safe spaces for queer and trans individuals globally.Future Plans: Upcoming retreats, pop-up events, and collaborations to support the queer community.Upcoming Events with Chris and Robin:Retreats: Portland, Michigan, and Provincetown.Support Groups: Messy Middle Monday, Gender Expansive Support Group, and Not Straight Support Group.Connect with Chris and Robin:Robin: comingoutlater@gmail.comChris: cmbml17@gmail.comClosing Remarks: Sincere gratitude to Chris and Robin for sharing their wisdom and experiences. Stay healthy, vibrant, queer, and well. Join us next time for more inspiring conversations and a special announcement.Announcements:Tune in for episode 15 which may occur later than mid-August due to Stevie's medical school schedule. However, stay tuned for an exciting announcement coming up in episode 15.Queer Story Time Community Facebook Group: Now live and free to join! 
Connect with our vibrant community here: Facebook Group: https://www.facebook.com/share/JCiyGgCnpX7gPbfU/?mibextid=K35Xf Queer Story Time Email List: Stay updated with QST episodes, news, events, and future opportunities. Email List: http://eepurl.com/iSc-HQ Leave A Star Rating, Written Review, & Follow QST: I encourage QST listeners to leave a star rating and a written review on the podcast platform of your choice and to share the podcast with friends and family! This helps QST expand to an even bigger audience globally. Be sure to follow your host Stevie on Instagram @queertransthriving and the QST YouTube Channel: https://www.youtube.com/channel/UCsV_UVohIXCZkSXExp8aYkA Support QST & Buy Me A Coffee: If you'd like to support Stevie as your QST host, please consider buying me a coffee at this link and check out my additional offerings: https://buymeacoffee.com/queertransthriving Get In-Touch with Stevie via E-Mail: queerstorytimethepodcast@gmail.com Host: Stevie Inghram, M.S., YT, AWC, NMS-4 (they/them or she/her)
Contributor: Taylor Lynch, MD
Educational Pearls:
What is NMS? Neuroleptic Malignant Syndrome. Caused by anti-dopamine medication or rapid withdrawal of pro-dopaminergic medications. The mechanism is poorly understood. Life-threatening.
What medications can cause it? Typical antipsychotics (haloperidol, chlorpromazine, prochlorperazine, fluphenazine, trifluoperazine). Atypical antipsychotics carry less risk (risperidone, clozapine, quetiapine, olanzapine, aripiprazole, ziprasidone). Anti-emetic agents with anti-dopamine activity (metoclopramide, promethazine, haloperidol), but not ondansetron. Abrupt withdrawal of levodopa.
How does it present? Slowly, over 1-3 days (unlike serotonin syndrome, which has a more acute onset). Altered mental status in 82% of patients, typically agitated delirium with confusion. Peripheral muscle rigidity with decreased reflexes, AKA lead pipe rigidity (as opposed to the clonus and hyperreflexia of serotonin syndrome). Hyperthermia (>38 C, seen in 87% of patients). Can also have tachycardia, labile blood pressures, tachypnea, and tremor.
How is it diagnosed? Clinical diagnosis; focus on the timing of symptoms. There is no confirmatory lab test, but one may see elevated CK levels and a WBC of 10-40k with a left shift.
What else might be on the differential? Sepsis, CNS infections, heat stroke, agitated delirium, status epilepticus, drug-induced extrapyramidal symptoms, serotonin syndrome, malignant hyperthermia.
What is the treatment? Start with the ABCs. Stop all anti-dopaminergic meds and restart pro-dopaminergic meds if recently stopped. Maintain urine output with IV fluids if needed to avoid rhabdomyolysis. Active or passive cooling if needed. Benzodiazepines, such as lorazepam 1-2 mg IV q4h.
What are active medical therapies? Controversial treatments: bromocriptine (a dopamine agonist), dantrolene (classically used for malignant hyperthermia), and amantadine (increases dopamine release). Use as a last resort.
Dispo? Mortality is around 10% if not recognized and treated. Most patients recover in 2-14 days. Must wait 2 weeks before restarting any medications.
References: Oruch, R., Pryme, I. F., Engelsen, B. A., & Lund, A. (2017). Neuroleptic malignant syndrome: an easily overlooked neurologic emergency. Neuropsychiatric Disease and Treatment, 13, 161–175. https://doi.org/10.2147/NDT.S118438 Tormoehlen, L. M., & Rusyniak, D. E. (2018). Neuroleptic malignant syndrome and serotonin syndrome. Handbook of Clinical Neurology, 157, 663–675. https://doi.org/10.1016/B978-0-444-64074-1.00039-2 Velamoor, V. R., Norman, R. M., Caroff, S. N., Mann, S. C., Sullivan, K. A., & Antelo, R. E. (1994). Progression of symptoms in neuroleptic malignant syndrome. The Journal of Nervous and Mental Disease, 182(3), 168–173. https://doi.org/10.1097/00005053-199403000-00007 Ware, M. R., Feller, D. B., & Hall, K. L. (2018). Neuroleptic Malignant Syndrome: Diagnosis and Management. The Primary Care Companion for CNS Disorders, 20(1), 17r02185. https://doi.org/10.4088/PCC.17r02185 Summarized by Jeffrey Olson, MS2 | Edited by Meg Joyce & Jorge Chalit, OMS III
A quick rundown of the titles I've been playing this weekend: Rise of the Ronin, Alien Isolation, and Avatar. Also, NMS sales have bounced back thanks to the latest big update. Become a patron and support me on Patreon. Follow @a_marquino on Twitter.
In this episode of Queer Story Time, host Stevie interviews Ben Greene (he/him), an international public speaker, transgender man, and author of the book "My Child is Trans, Now What?". As a passionate advocate for transgender youth, Ben strives to meet everyone with compassion, regardless of their starting point. Join us as Ben shares insights on the importance of unconditional parental support, the impact of societal expectations, and his ongoing advocacy work. Key Points: Parental Support: Ben emphasizes that children exploring their gender identity need unconditional love and support from their parents. Delaying support can lead to feelings of isolation and a desire to leave home as soon as possible. Exploration: Allowing children to explore their gender identity without judgment is crucial. Exploration is a natural part of childhood and should not be restricted by rigid gender norms. Gender Expectations: Parents often impose their own gender expectations on their children. Letting go of these expectations benefits all children, helping them develop into emotionally intelligent and confident adults. Misconceptions: Ben discusses his own fears and misconceptions about hormone therapy, noting that societal narratives often distort the realities of transitioning. Identity: It's important to recognize that trans individuals have multifaceted identities beyond their gender. Trans people lead diverse and fulfilling lives. Advocacy: Ben addresses lawmakers, highlighting that anti-trans laws are unpopular and fail to address pressing societal issues. He urges them to consider the broader impact of their actions. Support Networks: Queer and trans youth should find supportive networks and allies to help them navigate challenges and advocate on their behalf. Storytelling: Sharing personal stories is a profound way to foster connection and understanding. Holding space for each other's truths is essential. Future Projects: Ben is working on a fantasy novel that explores the magic inherent in the trans experience, aiming to put more positive and diverse narratives into the world. Connect with Ben: Instagram: @pseudo.bro TikTok: @pseudobro Visit his website: www.bgtranstalks.com Announcements: Queer Story Time Community Facebook Group: Now live and free to join! Connect with our vibrant community here: Facebook Group: https://www.facebook.com/share/JCiyGgCnpX7gPbfU/?mibextid=K35Xf Queer Story Time Email List: Stay updated with QST episodes, news, events, and future opportunities. Email List: http://eepurl.com/iSc-HQ Leave A Star Rating, Written Review, & Follow QST: I encourage QST listeners to leave a star rating and a written review on the podcast platform of your choice and to share the podcast with friends and family! This helps QST expand to an even bigger audience globally. Be sure to follow your host Stevie on Instagram @queertransthriving and the QST YouTube Channel: https://www.youtube.com/channel/UCsV_UVohIXCZkSXExp8aYkA Support QST & Buy Me A Coffee: If you'd like to support Stevie as your QST host, please consider buying me a coffee at this link and check out my additional offerings: https://buymeacoffee.com/queertransthriving Get In-Touch with Stevie via E-Mail: queerstorytimethepodcast@gmail.com Host: Stevie Inghram, M.S., YT, AWC, NMS-4 (they/them or she/her)
Welcome to the CORE podcast, where we have thoughts on all things gaming, including new Comscore data showing gamers might be caring about ads in games differently than you think. Plus, the Deadpool bum controllers, Diablo and the new Spiritborn class, the new NMS update, and Concord getting some nice pre-release love from players. Plus a souls-like phone call, and more! GAMES PLAYED: SHARED: FF14, Elden Ring, the No Man's Sky update. SCOTT: Crab Champions, Bomb Rush Cyberfunk, Ace Combat 7: Skies Unknown. JON: See Shared! BEAU: More ZZZ! (not sleep) Hosted on Acast. See acast.com/privacy for more information.
In this episode of "Queer Story Time," Stevie interviews Alyy Patel, a queer South Asian gender-fluid individual, about their experiences and perspectives on identity, culture, and spirituality.Key Points Discussed:Navigating Dual Identities:Alyy shares their journey of discovering and embracing both their queerness and South Asian identity, rejecting the idea that they must compromise one for the other.Impact of Colonialism:They discuss how colonialism influenced their family's perspectives on queerness, emphasizing that resistance to queerness in their community often stems from colonial history.Spirituality and Queerness:Alyy finds peace through spirituality, noting that their religion inherently includes queer elements.Queer Community Visibility:Alyy stresses the importance of queer South Asian visibility and encourages others to protect themselves while living authentically, even if it means living a double life.Policy and Lawmakers:Alyy criticizes lawmakers who create anti-queer legislation, urging them to use research and engage with queer communities to make informed decisions.Coming Out Advice for Queer Youth:Alyy advises queer and trans South Asian youth to prioritize safety and financial security, challenging the Western narrative that coming out is necessary for queer validity.Future Aspirations:Alyy hopes to continue advocating for queer South Asian visibility and to speak on larger platforms to share their message.Connect with Alyy:Instagram @alyypatelVisit their website - www.alyypatel.comAnnouncements:Queer Story Time Community Facebook Group: Now live and free to join! Connect with our vibrant community here: Facebook Group: https://www.facebook.com/share/JCiyGgCnpX7gPbfU/?mibextid=K35XfQueer Story Time Email List: Stay updated with QST episodes, news, events, and future opportunities Email List: http://eepurl.com/iSc-HQLeave A Review & Follow QST:I encourage QST listeners to leave a review on the podcast platform of your choice and to share the podcast with friends and family! This helps QST expand to an even bigger audience globally.Be sure to follow your host Stevie on Instagram @queertransthriving and the QST YouTube Channel: https://www.youtube.com/channel/UCsV_UVohIXCZkSXExp8aYkA Support QST & Buy Me A Coffee:If you'd like to support my work as your QST host, please consider buying me a coffee at this link and check-out my additional offerings: https://buymeacoffee.com/queertransthriving Get In-Touch with Stevie via E-Mail: queerstorytimethepodcast@gmail.comHost: Stevie Inghram, M.S., YT, AWC, NMS-4 (they/them or she/her)
Episode Description:Join us for the final Pride Month episode of 2024, commemorating the 55th anniversary of the Stonewall Uprising. In this episode, we'll explore Stonewall and significant moments in global queer and trans history.Episode Highlights:Historical Context:Queer and trans identities have always existed, even if modern terminology did not. Institute of Sexual Research (Early 1900s):Founded by Dr. Magnus Hirschfeld in Berlin, a haven for queer and trans research and clinical care. Nazi Suppression:The destruction of the Institute of Sexual Research by the Nazis in 1933, burning 20,000 books important to queer/trans research & identity. World War II:Queer military personnel found solidarity despite discrimination.Conversion Therapy:Establishment medicine wrongly pathologized homosexuality, leading to harmful conversion practices. Mattachine Society:Early LGBTQ+ rights organization advocating for civil rights and dignity.Stonewall Uprising:Police raids at the Stonewall Inn led to the historic uprising on June 28, 1969.Birth of Pride:The first Pride parade was organized on June 28, 1970, marking the Stonewall anniversary.Global Decriminalization:Many countries decriminalized homosexuality in the 1960s and beyond, with ongoing struggles in Asia.Conclusion: Honoring the contributions of those who fought for LGBTQ+ rights, we celebrate Pride as an ongoing fight for queer & trans equity, equality, and liberation.Announcements:Queer Story Time Community Facebook Group: Now live and free to join! Connect with our vibrant community here: Facebook Group: https://www.facebook.com/share/JCiyGgCnpX7gPbfU/?mibextid=K35XfQueer Story Time Email List: Stay updated with QST episodes, news, events, and future opportunities Email List: http://eepurl.com/iSc-HQLeave A Review & Follow QST:I encourage QST listeners to leave a review on the podcast platform of your choice and to share the podcast with friends and family! This helps QST expand to an even bigger audience globally.Be sure to follow your host Stevie on Instagram @queertransthriving and the QST YouTube Channel: https://www.youtube.com/channel/UCsV_UVohIXCZkSXExp8aYkA Support QST & Buy Me A Coffee:If you'd like to support my work as your QST host, please consider buying me a coffee at this link and check-out my additional offerings: https://buymeacoffee.com/queertransthriving Get In-Touch with Stevie via E-Mail: queerstorytimethepodcast@gmail.comHost: Stevie Inghram, M.S., YT, AWC, NMS-4 (they/them or she/her)
*Support the Dobré ráno podcast in the Toldo app at sme.sk/extradobrerano and in the Podcast of the Year 2024 competition at podcastroka.sk. On Saturday we will vote again, this time in the European elections. And, as usual here, a moratorium takes effect from tomorrow. Before that, however, the NMS agency prepared an election poll model for SME that points to some interesting shifts in the parties' preferences. In the Dobré ráno podcast, Tomáš Prokopčák talks with analyst Mikuláš Hanes of NMS. Sound sources: YouTube/Andrej Danko, SMER-SD, Republika, Facebook/HLAS-SD, Progresívne Slovensko Recommendation: Do you know what shape the universe has? Neither do I, but it is probably far more complex than we assume. Scientific American looked at black holes and their influence on the structure of our cosmos, and the article on how many holes the universe actually has is my recommendation for today. *Support the Dobré ráno podcast in the Toldo app at sme.sk/extradobrerano – You can find all SME daily podcasts at sme.sk/podcasty – Also subscribe to the audio version of the SME.sk daily newsletter with the most important news at sme.sk/brifing – Thank you for listening to the Dobré ráno podcast.
Introduction:Join host Stevie as they kick off Pride Month 2024 with a heartfelt conversation about queer and trans history. Stevie is joined by a diverse panel of guests, including Tricia D. Carlisle (She/They), Jackson Pace (He/Him), Michael Peterson (He/Him), and Dr. Jampa Wurst (They/Them). These esteemed elders in the queer and trans community share their insights and reflections on the rich tapestry of LGBTQ+ history. Delving into the evolution of the queer and trans community, highlighting pivotal moments in history and the resilience that has defined the movement. From personal anecdotes to discussions about education, stigma, and medical advancements, this episode offers a nuanced look at the challenges and triumphs experienced by LGBTQ+ individuals. Through candid conversations and shared experiences, the panelists celebrate pride, resilience, and the importance of continuing the fight for equality.Connect with our guests:Tricia D: tiktok.com/@triciadcarlisle and Instagram @chorkiemamaJackson Pace: Instagram @jmackenziepaceMichael Peterson: Instagram @michaeljonpetersonDr. Jampa Wurst: Instagram @queerandbuddhistStevie expresses gratitude to the guests for their insightful contributions and encourages listeners to continue engaging in conversations about queer and trans history throughout Pride Month and beyond.Stay tuned for episode 11 as we continue the celebration of Pride Month 2024 and the vibrant history and resilience of the LGBTQ+ community!Leave A Review, Follow QST, & Get In-Touch:I encourage QST listeners to leave a review on the podcast platform of your choice and to share the podcast with friends and family! This helps QST expand to an even bigger audience globally.Be sure to follow your host Stevie on Instagram @queertransthriving and the QST YouTube Channel: https://www.youtube.com/channel/UCsV_UVohIXCZkSXExp8aYkA Get In-Touch with Stevie via E-Mail: queerstorytimethepodcast@gmail.comSupport QST & Buy Me A Coffee:If you'd like to support my work as your QST host, please consider buying me a coffee at this link and check-out my additional offerings: https://buymeacoffee.com/queertransthriving Host: Stevie Inghram, M.S., YT, AWC, NMS-4 (they/them or she/her)
Janna (she/her), a devoted parent of a trans son, shares her journey of acceptance and advocacy, urging lawmakers to embrace diversity and avoid imposing beliefs on others. She addresses misconceptions about trans people, offers support to parents, and emphasizes the importance of love and understanding. Janna's story is a testament to the power of love and the importance of embracing diversity within our communities. Key Points: Drawing from her personal experiences, Janna sheds light on the importance of inclusivity and compassion within religious communities, especially the Christian community, challenging traditional beliefs in order to foster understanding and acceptance for all individuals. Addressing lawmakers worldwide, Janna urges them to see the bigger picture. She uses the analogy of a forest to illustrate the importance of diversity: just as a forest thrives with different types of trees, society flourishes with diverse beliefs and ways of living. Further, she dispels myths about trans folks and highlights the real threats they face to their existence, tackling the unfounded fears surrounding trans people, especially in the context of bathroom usage, by emphasizing that there is no evidence of trans individuals causing harm to others. Throughout the episode, Janna offers guidance and reassurance to parents struggling with their child's identity, emphasizing love and understanding. She also shares a tender moment with her son and reflects on the joy and acceptance she witnessed at an LGBTQ convention. Overall, Janna's insights highlight the beauty of diversity and the power of acceptance. Connect with Janna: Instagram/TikTok: @Jannatransmama Email: jannatransmomma@gmail.com Upcoming Episode: Stay tuned for episode TEN of QST as we celebrate this milestone episode, Pride Month, and the honoring of our Queer & Trans Elders. Be sure to follow Stevie on social media for QST updates and more inspiring content @queertransthriving on Instagram. Donations: To support this podcast, please make your one-time or ongoing donation in a way that is sustainable to you by contributing here: · Venmo- @stevie-inghram · CashApp- $stevieinghram · PayPal- @jsinghram Host: Stevie Inghram, M.S., YT, AWC, NMS-4 (they/them or she/her)
In this deeply engaging episode of Queer Story Time, Stevie sits down with the inspiring Garland Guidry for a candid and heartfelt conversation. Garland is a hairstylist, author, motivator, and consultant. She shares her journey of self-discovery, embracing authenticity, and finding connection within the queer and trans community. From discussing the importance of supportive friendships to navigating the challenges of politics and legislation, Garland offers wisdom and insights that resonate deeply with listeners. She also shares exciting news about her new book, "My Romance His Friendship," and her vision for the future.
Key Points:
· Garland emphasizes the importance of being on purpose and in purpose, rather than searching for a single defining purpose.
· Reflecting on her own experiences, Garland advises her younger self to worry less and have more fun, emphasizing the importance of self-acceptance.
· She also highlights the need for understanding and acceptance within the queer and trans community, stressing that love and support don't require full understanding.
· Discussing lawmakers and legislation, Garland shares her disillusionment with politics and calls for a collective shift towards authenticity and kindness.
· She shares her top tips for queer and trans youth.
Connect with Garland Guidry:
Instagram: @iamladygarland
Stay tuned for the release of Garland's new book, "My Romance His Friendship," available on Kindle and Audible.
Coming Up Next: Don't miss the upcoming episode of Queer Story Time, coming out in two weeks! Be sure to follow Stevie on social media for updates and more inspiring content @queertransthriving.
Donations: To support this podcast, please make your one-time or ongoing donation in a way that is sustainable to you by contributing here:
· Venmo- @stevie-inghram
· CashApp- $stevieinghram
· PayPal- @jsinghram
Host: Stevie Inghram, M.S., YT, AWC, NMS-4 (they/them or she/her)
Introduction: In this episode, we welcome Jackson Pace (he/him), a resilient advocate for LGBTQ+ rights and a passionate believer in the power of authenticity. Jackson shares his remarkable journey: navigating the complexities of step-parenting, embracing his identity as a trans masc person later in life, and becoming a vocal advocate for equality and acceptance. Through personal anecdotes and insightful reflections, Jackson invites listeners to explore the transformative power of resilience, community, and connection with nature.
*Trigger Warning: This episode contains brief discussion of a suicide attempt and sexual abuse.*
Key Points:
Navigating Step-parenting: Reflecting on seven years as a step-parent, Jackson discusses the complexities of parenting dynamics and the struggle of instilling values amid resistance.
Embracing Authenticity: Transitioning at 59, he discusses the process of coming out as trans and finding fulfillment in living authentically, emphasizing self-acceptance.
Discovering Connection: Finding support and community online, Jackson shares the positive impact of social media in fostering connections within the LGBTQ+ community.
Advocacy and Resilience: Addressing discriminatory laws, he urges acceptance and protection of LGBTQ+ individuals, emphasizing resilience and collective advocacy.
Nature and Connection: Finding solace in nature, Jackson discusses the healing power of being outdoors and connecting with the Earth.
Through his personal narratives, Jackson inspires listeners to embrace authenticity, advocate for change, and find connection in nature and community, highlighting the transformative power of passion and resilience.
Connect with Jackson Pace:
Instagram: @jmackenziepace
Stay tuned for Episode 8 of Queer Story Time.
Donations: To support this podcast, please make your one-time or ongoing donation in a way that is sustainable to you by contributing here:
· Venmo- @stevie-inghram
· CashApp- $stevieinghram
· PayPal- @jsinghram
Host: Stevie Inghram, M.S., YT, AWC, NMS-4 (they/them or she/her)
No More Secrets Mind Body Spirit Inc., a comprehensive sexuality awareness organization founded and run by a dynamic mother-daughter duo, officially opened "The SPOT Period," the nation's first and only Menstrual Hub and Uterine Wellness Center, on February 20th, 2021, in the midst of a national pandemic, funded through a crowdfunding campaign. No More Secrets MBS Inc.'s mission is to eradicate period poverty and period stigma nationally through a menstrual and social justice framework. In 2021 alone, NMS independently distributed over 6 million menstrual products throughout the nation to end period poverty and educated over 250,000 individuals to decrease period stigma in vulnerable populations.
Join us for a transformative chat with Dr. Michelle Cromwell, an experienced anti-oppression scholar-activist, conflict coach, and dialogue facilitator. With almost two decades in higher education, including tenure as an associate professor and as a vice president of diversity, equity, and inclusion, Dr. Cromwell now tackles organizational pain related to equity issues as an independent consultant. As an equity-centered wellness practitioner and travel curator, they empower others to live authentically and compassionately. Get ready for insights on identity, advocacy, and holistic wellness!
· Dr. Cromwell emphasizes that gender identity is not monolithic, highlighting the importance of understanding and respecting the diverse experiences within the non-binary community.
· Drawing from Buddhist practices, Dr. Cromwell encourages mutual support by asking for what one needs while offering what they can, fostering interconnectedness and empathy.
· Dr. Cromwell passionately calls for an end to discriminatory laws targeting the queer and trans community, urging lawmakers to recognize the humanity of all individuals and promote equality.
· Their advice for queer and trans youth emphasizes finding supportive communities that celebrate authenticity, enabling individuals to navigate challenges with confidence and resilience.
· Dr. Cromwell finds fulfillment in embracing their wholeness and pursuing activities that bring joy, emphasizing the importance of meaningful connections with others.
· Dr. Cromwell focuses on expansion and alignment through immersive travel experiences, empowerment pods, and a summit aimed at helping women achieve alignment and empowerment.
Connect with Dr. Michelle Cromwell:
Facebook: Dr. Michelle Cromwell
LinkedIn: Dr. Michelle Cromwell
Website: FullyAlignedWoman.com
Stay tuned for Episode 7 of Queer Storytime, coming at the end of February!
Donations: To make your one-time or ongoing donation to support this podcast, please contribute here:
· Venmo- @stevie-inghram
· CashApp- $stevieinghram
· PayPal- @jsinghram
Host: Stevie Inghram, M.S., C-IAYT, AWC, NMS-4 (they/them or she/her)
Welcome, everyone, to Episode 5 of the Stevie Inghram Podcast! I'm your host, Stevie Inghram, and I'm thrilled to kick off 2024 with you. This episode holds a special place in my heart, and I want to express my gratitude for your continued support.
Acknowledging the Journey: In this episode, I reflect on the journey to bring you this special episode as I navigate the final stretch of medical school, and I offer my gratitude for your patience and understanding. This episode aims to provide healing, peace, and fortitude as we step into 2024. As a lifelong dharmic practitioner, I introduce a transformative practice from Dharmic traditions: Metta Meditation. This practice goes beyond dogma, offering solace and ease to all, regardless of spiritual beliefs. The meditation begins with extending well-wishes to a loved one, a neutral person, ourselves, and ultimately, to the entire world community. Stevie guides listeners through phrases of loving-kindness, fostering a heart-centered awareness.
No Escape, Just Connection: Stevie emphasizes that these practices aren't about escapism but rather a means to tune into our inner landscape. The intention is to offer well-wishes of health, well-being, vitality, love, gentleness, kindness, goodwill, and benevolence. As we conclude this Metta Meditation practice, Stevie encourages listeners to return to this place of peace and ease whenever needed.
Stay tuned for Episode 6 of Queer Storytime on February 12, and from Stevie's heart to yours, may you continue to be happy and healthy, live with ease and authenticity, and be free from suffering. Be well, everyone. Happy 2024!
Donations: To make your one-time or ongoing donation to support this podcast, please contribute here:
· Venmo- @stevie-inghram
· CashApp- $stevieinghram
· PayPal- @jsinghram
Host: Stevie Inghram, M.S., C-IAYT, AWC, NMS-4 (they/them or she/her)
My Light No Fire trailer reaction was incredibly positive, and I think this is the next big game from Hello Games, the studio behind No Man's Sky. The Light No Fire gameplay is very reminiscent of No Man's Sky gameplay as of 2023. The Sean Murray Game Awards appearance and announcement also seemed to take aim at Starfield. I also want to look at the Light No Fire reaction from other gamers. Some think that they can't trust Hello Games after the NMS release. And while we wait for the Light No Fire release date, I want to talk about why I think this game will be massive and why I trust the developers. Light No Fire is a game about adventure, building, survival, and exploration together. Set on a fantasy planet the size of Earth, it brings the depth of a role-playing game to the freedom of a survival sandbox. Reforge Gaming is a live talk show hosted by Lono covering the hottest and newest topics in a variety of gaming news with unmatched interaction, live event coverage, and question-and-answer segments. It is a live gaming podcast, weekdays @9:00 AM EST.
Don't have a video? Watch This Episode on YouTube
We have a passionate community that loves gaming! JOIN OUR DISCORD
Coffee drinker? If you've never tried a balanced acidity coffee, try REFORGE ROAST
We love having our audio listeners in the audience for the live show! - REFORGE GAMING
Send in a voice message: https://podcasters.spotify.com/pod/show/reforgegaming/message
In this episode, Dr. Jampa Wurst, a queer non-binary Buddhist, delves into the significance of international queer Buddhist conferences and the creation of safe spaces within the LGBTQ+ community. The conversation unfolds around themes of connectivity, embracing diversity, and challenging societal norms.
Dr. Jampa reflects on the fluidity of gender and sexuality, drawing parallels between the expansive nature of Buddhism and the multifaceted representation in sci-fi series like Doctor Who. The discussion explores the intersection of identity, labels, and the Buddhist concept of emptiness, emphasizing the coexistence of relative and absolute dimensions.
The episode touches on the challenges faced by the LGBTQ+ community, particularly legislative issues globally. Dr. Jampa advocates for empathy, urging lawmakers to walk in the shoes of those affected by discriminatory laws.
As the founder of the International Queer Buddhist Conference (IQBC), Dr. Jampa shares insights into the origins of the conference and its role as a global family providing a sense of belonging and support. They discuss the evolving landscape of the conferences and future plans, highlighting the increasing focus on transgender and non-binary issues within the Buddhist framework.
The podcast concludes with Dr. Jampa's thoughts on life purpose, resilience, and a call to action for queer and non-binary youth to find safety in community. The significance of chosen family and the power of global connections, particularly through virtual events during the pandemic, are emphasized.
Listeners are encouraged to explore the International Queer Buddhist Conference (IQBC) for a deeper understanding of the topics discussed. Join the conversation, connect with Dr. Jampa, and explore the rich tapestry of experiences within the LGBTQ+ community and Buddhism.
Links:
· https://iqbc.org/
· Instagram - @queerandbuddhist
Donations: To make your one-time or ongoing donation to support this podcast, please contribute here:
· Venmo- @stevie-inghram
· CashApp- $stevieinghram
· PayPal- @jsinghram
Host: Stevie Inghram, M.S., C-IAYT, AWC, NMS-4 (they/them or she/her)
Welcome to Queer Storytime! In this episode, we dive deep into a conversation with Drummond Culture, a vibrant voice in the queer community, exploring topics such as trauma, healing, self-love, and the strength of the queer and trans community.
Episode Highlights:
Trauma and Healing: Drummond discusses the impact of trauma on the queer and trans community and emphasizes that healing is intrinsically tied to connection. The discussion sheds light on the need for reconnecting with oneself, the community, and nature as a vital part of the healing journey.
Authenticity and Connection: The conversation delves into the power of living authentically and connecting with others. Drummond shares personal insights into how embracing authenticity has led to personal healing and provided an opportunity to help others on their healing journeys.
Love as a Force for Change: Stevie and Drummond highlight the strength and love within the queer community, challenging the notion that vulnerability is a weakness. Drummond underscores the importance of love as a powerful energy, capable of creating positive change in the world.
Challenges Faced by the Queer Community: The episode discusses the challenges posed by societal expectations, prejudices, and the need for legislation that protects the rights of the queer and trans community. Drummond encourages seeing the community as a source of strength, love, and solutions to broader societal issues.
Positive Talks and Creating Safe Spaces: Drummond introduces Positive Talks, an initiative aimed at sharing diverse perspectives and creating positive spaces. The discussion unfolds into Drummond's vision for Positive Space, a virtual community, and plans to extend this positive influence to college campuses.
Show Conclusion: The conversation wraps up with insights on self-love, the importance of community for queer and trans youth, and the need for inward reflection to understand how we may unintentionally cause harm to others.
Connect with Drummond:
Instagram & YouTube: @DrummondCulture
Email: drummondculture@gmail.com
Life Coaching: drummondculture.com
Positive Talks & Positive Space:
· Positive Talks on Instagram Live
· Positive Space Virtual Community
Donations: To make your one-time or ongoing donation to support this podcast, please contribute here:
· Venmo- @stevie-inghram
· CashApp- $stevieinghram
· PayPal- @jsinghram
Host: Stevie Inghram, M.S., C-IAYT, AWC, NMS-4 (they/them or she/her)
Dr. Kimberly Nordstrom, Past President of the American Association for Emergency Psychiatry and Associate Clinical Professor of Psychiatry at the University of Colorado, discusses the process of considering medical contributions to psychiatric illness. We discuss red flags that should guide clinicians to start thinking medically, explain the importance of systematically approaching a differential diagnosis, and provide a brief introduction to a few common medical-psychiatric conditions including autoimmune encephalitis, neuroleptic malignant syndrome (NMS), and serotonin toxicity ("serotonin syndrome").
Book: Quick Guide to Psychiatric Emergencies: Tools for Behavioral and Toxicological Situations (check your academic library!)
In Season 8, Episode 4 of Milkcrates & Microphones, we get a visit from trailblazing hip-hop artists Bigg Jus (of Company Flow) & Orko Eloheim (The Sycotik Alien), aka Nephlim Modulation Systems. Jus & Orko dive into a number of topics, including how they found their way into hip-hop and making music, the first time they crossed paths with each other, forming and making music with Company Flow, why the time is right for a 3rd NMS album, questioning what we've been taught, the lost arts of diggin' in the crates and tape-trading, the iconic Access Hip-Hop shop in San Diego, AI music, plus so much more. We also bring you your favorite Milk&Mics segments like "This Week in Hip Hop" and "Song Picks of the (Motha Fuckin') Week", NMS style. Enjoy. Subscribe and tell a friend.
Follow Orko Eloheim on Instagram here: @orkoeloheim_ultranet
Follow us on YouTube @ https://www.youtube.com/channel/UC5Jmk_m0_zhxjjYRHWDtvjQ, on Instagram @ https://www.instagram.com/milkandmics/?hl=en, and on Facebook @ https://www.facebook.com/milkandmics/
Podcasting 2.0 March 17th 2023 Episode 126: "Podcasting Power Rangers"
Adam & Dave discuss the week's developments on podcastindex.org - there's a huge V4V opportunity up for grabs, topped up with some sexy namespace talk.
ShowNotes:
We're LIT
Defcon 1 alert! Listening to NMS and PWR
PodcastEZ - Build, Launch and Grow Your Podcast. Just Speak!
Sam Sethi is off skiing in Kitzbuhl .. probably with Roy from Breez
Courtney Kocak - Private Parts Unknown
Todd - 30% fill rate
Video podcasts - it failed before
Difference between NOSTR and V4V - Splits for developers!
Github is becoming a free for all of features
PodPing Congestion
Conference Room Music Side Project
Transcript Search
What is Value4Value? - Read all about it at Value4Value.info
V4V Stats
Last Modified 03/17/2023 14:40:53 by Freedom Controller