Podcasts about recurrent

  • 701 PODCASTS
  • 1,347 EPISODES
  • 34m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • May 28, 2025 LATEST


Latest podcast episodes about recurrent

Fertility Confidence Podcast
The link between asthma and recurrent miscarriage

Fertility Confidence Podcast

Play Episode Listen Later May 28, 2025 28:08


In this episode we are exploring the controversial connection between asthma and fertility. I'm going to share some of the research we have so far showing how asthma can impact pregnancy outcomes, particularly in relation to recurrent miscarriage. How the immune system plays a role in infertility and miscarriage is rarely talked about - but it is impacting so many couples worldwide! I cover why we need to care, what the testing looks like, and how you can support your body if you come back with positive markers. Want more support? Join me in my free 5-day Fertility Confidence Bootcamp starting June 2nd and get started on the next step of your fertility care plan with me. Register at ttc.kelseyduncan.com/bootcamp. Thank you to our amazing podcast sponsor, Needed. You can save 20% off your first order with code DRKELSEY at thisisneeded.com. Want a step-by-step plan backed by science and based on your own unique fertility identifiers? Let's get to the root cause together and make a baby. Learn more about how Fertility Confidence Method can support you at downloads.kelseyduncan.com/results

Primary Care Knowledge Boost
Recurrent Epistaxis in Children

Primary Care Knowledge Boost

Play Episode Listen Later May 21, 2025 17:07


Episode three of four on Paediatric ENT. Doctors Lisa and Sara are back with Paediatric Ear Nose and Throat Consultant Dr Simone Schaefer for this episode on Recurrent Epistaxis in Children. We discuss important differentials, including a rare, not-to-be-missed condition that presents predominantly in teenage boys, before moving on to options for management and why the vast majority of these patients can be safely managed in the community. We also discuss which cases would benefit from being seen by the ENT team. Short and sweet, full of useful resources. You can use these podcasts as part of your CPD - we don't do certificates but they still count :)

Resources:
Success rates of Naseptin (chlorhexidine dihydrochloride and neomycin sulfate) in reducing epistaxis: Garry S, Wauchope J, Hintze J, Ryan E, O'Cathain E, Heffernan C. Factors affecting Naseptin treatment success – A prospective cohort study. International Journal of Pediatric Otorhinolaryngology. Volume 171. 2023: https://www.sciencedirect.com/science/article/abs/pii/S0165587623001878#:~:text=80.8%25%20(n%20%3D%20101),effects%20(skin%20irritation%20etc.)
ENT UK: How to use a Nasal Spray: https://www.entuk.org/patients/conditions/79/how_to_use_nasal_sprays/
Asthma and Lung UK, How to use a Nasal Spray (useful video for patients): https://www.youtube.com/watch?v=S31maomo1xQ
Alder Hey Children's Hospital patient leaflet: Nosebleeds: https://www.alderhey.nhs.uk/conditions/symptoms-checker/nosebleeds/
___
We really want to make these episodes relevant and helpful: if you have any questions or want any particular areas covered then contact us on Twitter @PCKBpodcast, or leave a comment on our quick anonymous survey here: https://pckb.org/feedback
Email us at: primarycarepodcasts@gmail.com
___
This podcast has been made with the support of GP Excellence and Greater Manchester Integrated Care Board.
Given that it is recorded with Greater Manchester clinicians, the information discussed may not be applicable elsewhere, and it is important to consult local guidelines before making any treatment decisions. The information presented is the personal opinion of the healthcare professional interviewed and might not be representative of all clinicians. It is based on their interpretation of current best practice and guidelines when the episode was recorded. Guidelines can change; to the best of our knowledge the information in this episode is up to date as of its release, but it is the listener's responsibility to review the information and make sure it is still up to date when they listen. Dr Lisa Adams, Dr Sara MacDermott and their interviewees are not liable for any advice, investigations, course of treatment, diagnosis or any other information, services or products listeners might pursue as a result of listening to this podcast - it is the clinician's responsibility to appraise the information given and review local and national guidelines before making treatment decisions. Reliance on information provided in this podcast is solely at the listener's risk. The podcast is designed to be used by trained healthcare professionals for education only. We do not recommend these for patients or the general public and they are not to be used as a method of diagnosis, opinion, treatment or medical advice for the general public. Do not delay seeking medical advice based on the information contained in this podcast. If you have questions regarding your health or feel you may have a medical condition then promptly seek the opinion of a trained healthcare professional.

The Rebooting Show
Henry Blodget on the media business in 2025

The Rebooting Show

Play Episode Listen Later May 20, 2025 63:42 Transcription Available


Henry Blodget built Business Insider into one of the few breakout successes of the traffic era. Now, with his new project Regenerator, he's taking a different path—one that reflects the broader shifts in digital media. We talk about the collapse of platform-driven distribution, why subscriptions are a more durable model, how venture capital never quite fit the media business, and what it takes to build in the current environment of fragmented attention and AI disruption. Check out The Rebooting's recent research report about AI and personalization. Attend The Rebooting's Online Forum with Recurrent and WordPress VIP on how Recurrent migrated its tech stack.

The Rebooting Show
Anonymous Banker's bleak view of the media M&A market

The Rebooting Show

Play Episode Listen Later May 7, 2025 66:09 Transcription Available


I spoke with Anonymous Banker, an M&A advisor with a front-row view into the market for buying and selling digital media companies. Needless to say, it's a buyer's market. AB breaks down the market for digital publishing assets – broadly those with page-based models – into three types of buyers:

  • Harvesters
  • CAC jockeys
  • Vanity projects/rich person playthings

“If you're a publisher with a mostly ad-supported site, odds are your business will be worth less next year than it is now,” he said. Deals are still getting done, but the buyers are different. These are no-name PE firms above ice cream shops in the outskirts of Miami. We go through the list, which ranges from Valnet to Static Media to Savage Ventures to Regent. The playbook is to buy undervalued media properties, slash costs, and milk the programmatic revenue with hyper-lean models that rudely dispense with the nostalgia of “when the going was good.”

“Any content they invest in has to be ROI positive within 30 days,” AB said. “You'll never see them spend $20 million hoping advertisers show up. Those days are done.”

Other topics we covered:

  • How AI uncertainty is creating overhang that depresses valuations and makes long-term modeling nearly impossible
  • Why the most resilient media businesses are lead-generation machines or conversion front-ends
  • Whether the Chernin Group content-to-commerce thesis was wrong
  • How Substack's recommendation engine is the most efficient user acquisition channel in media
  • What kinds of content investors still believe in (hint: high-intent verticals, not general news)

Check out The Rebooting's new media product research report. Sign up for The Rebooting's Online Forum on May 21 at 1pm ET featuring a case study on how Recurrent migrated its CMS across a portfolio of sites without disruption.

The Automotive Troublemaker w/ Paul J Daly and Kyle Mountsier
Ford Slashes Guidance, EVs Grow In Q1, Waymo Plans For Expansion

The Automotive Troublemaker w/ Paul J Daly and Kyle Mountsier

Play Episode Listen Later May 6, 2025 12:37


Shoot us a Text. Episode #1037: Today we break down Ford's Q1 fallout as tariffs cast a $1.5B shadow, electrified vehicles surge to nearly 25% of new-car sales led by hybrids and non-Tesla EVs, and Waymo revs up its driverless ambitions with a new Arizona mega-factory and major city expansion plans.

Show Notes with links:

Ford is bracing for a bumpy ride in 2025 after revealing a steep Q1 net income drop and pulling its full-year earnings forecast. Mounting tariff costs and factory downtime from SUV redesigns weighed heavily on the automaker's performance, with more uncertainty ahead.

  • Net income fell 65% to $471M.
  • Ford expects 2025 tariffs to cut profits by $1.5B, despite $1B in planned offsets.
  • CFO says Q1 tariffs cost $200M, mitigated partly by bonded carrier routes through Canada.
  • Ford Pro earned $1.3B (down 56%); Model e lost $849M despite a 15% EV sales bump.
  • CEO Jim Farley: “We are strengthening our underlying business with significantly better quality and our third straight quarter of year-over-year cost improvement, excluding the impact of tariffs.”

Electrified vehicles accounted for nearly one in four new-car sales in Q1 2025, as hybrids led the growth and Tesla's dominance continued to wane. Buyers are moving fast—whether driven by rebates, tariffs, or just better options.

  • EV sales hit 750,698 in Q1, up 29.6% YoY, with total electrified market share reaching 24.4%.
  • Hybrids surged 44.1%, capturing 13.3% of all new-car sales, thanks largely to Toyota, Honda, Hyundai, Ford, Lexus, and Kia—who together own 97% of the segment.
  • Tesla's BEV share dipped to 44.2%, while non-Tesla BEVs jumped 47%.
  • Florida EV sales rose 42.5%, while Texas surged 37.1%, outpacing growth in traditional EV strongholds.
  • “A significant part of it is due to automakers tapping into what drivers want. The 2025 lineup offers 71 unique models (up from 54 in 2024) with improved specs and options for every lifestyle,” said Recurrent's Liz Najman.
Waymo is transitioning from test phase to mass production, expanding its ride-hailing footprint while anchoring its future with a high-capacity, AV-focused factory in Arizona.

  • Waymo One now handles 250,000 weekly rides across Phoenix, LA, SF, and Austin, with expansions into Atlanta, Miami, and Washington, D.C. planned in 2026.
  • A 239,000-square-foot factory in Mesa, AZ will build thousands of autonomous Jaguars annually, in partnership with Magna.
  • The facility will feature a fully automated line and produce vehicles with Waymo's latest sixth-gen Driver tech.
  • “The Waymo Driver integration plant in Mesa is the epicenter of our future growth plans,” said Ryan McNamara, Waymo's VP of operations.

Join Paul J Daly and Kyle Mountsier every morning for the Automotive State of the Union podcast as they connect the dots across car dealerships, retail trends, emerging tech like AI, and cultural shifts—bringing clarity, speed, and people-first insight to automotive leaders navigating a rapidly changing industry. Get the Daily Push Back email at https://www.asotu.com/ and JOIN the conversation on LinkedIn at: https://www.linkedin.com/company/asotu/

UF Health Podcasts
Could a safe, effective treatment of equine recurrent uveitis be in sight?

UF Health Podcasts

Play Episode Listen Later May 5, 2025


Recurrent uveitis [you-vee-EYE-tus] is a leading cause of blindness in horses. It involves repeated…

Animal Airwaves
Could a safe, effective treatment of equine recurrent uveitis be in sight?

Animal Airwaves

Play Episode Listen Later May 5, 2025 1:00


Recurrent uveitis [you-vee-EYE-tus] is a leading cause of blindness in horses. It involves repeated inflammation of the uvea [YOU-vee-uh], the pigmented layer of the eye. In cases of equine autoimmune...

The Cribsiders
S6 Ep140: Acute Recurrent & Chronic Pancreatitis - When Belly Pain Persists

The Cribsiders

Play Episode Listen Later Apr 30, 2025 41:52


Join us for part 2 of our informative discussion with Dr. David Vitale, a pediatric pancreatologist at Cincinnati Children's Hospital. In this episode, we dive deep into acute recurrent and chronic pancreatitis, distinguishing the two, and exploring the causes, genetic predispositions, and available treatments. Whether you're a budding pancreatologist or a PCP, this episode offers valuable insights into managing and treating this challenging condition.

Primary Care Knowledge Boost
Recurrent Tonsillitis in Children

Primary Care Knowledge Boost

Play Episode Listen Later Apr 30, 2025 22:29


Episode two of four on Paediatric ENT conditions. Doctors Lisa and Sara are back with Paediatric Ear Nose and Throat Consultant Dr Simone Schaefer for this episode on Recurrent Tonsillitis in Children. We go through the definition of tonsillitis and what ENT classes as recurrent tonsillitis that would meet the criteria for tonsillectomy in our region. We use a case to discuss why referral criteria are so strict, as well as some exceptions to the criteria.

Resources:
ENT UK Recurrent Tonsillitis Decision Making Tool for Tonsillectomies (for parents): https://www.entuk.org/patients/conditions/63/helping_you_decide_about_tonsil_surgery_for_your_child
NHS England ENT UK Recurrent Tonsillitis Decision Making Tool for Tonsillectomies (for adults and children): https://www.england.nhs.uk/publication/decision-support-tool-making-a-decision-about-recurrent-tonsillitis-in-children-and-adults/

You can use these podcasts as part of your CPD - we don't do certificates but they still count :)
___
We really want to make these episodes relevant and helpful: if you have any questions or want any particular areas covered then contact us on Twitter @PCKBpodcast, or leave a comment on our quick anonymous survey here: https://pckb.org/feedback
Email us at: primarycarepodcasts@gmail.com
___
This podcast has been made with the support of GP Excellence and Greater Manchester Integrated Care Board.

Business Witch
How to Create Reliable Recurrent Revenue

Business Witch

Play Episode Listen Later Apr 28, 2025 37:44


In this solo episode, Cara reveals why chasing creative whims might be capping your income and why committing to a signature offer—backed by solid systems and data, not just passion—is crucial for reliable, recurrent revenue. She guides you to embrace the resilience needed to scale ethically, address the mindset blocks keeping you stuck, and build a business where liberation and impact pave the way to getting paid well for your sacred work.

Radical and Resourced: The class to give you the tools, tips, and resources you need to scale your service-based business in our current dystopian reality in a way that aligns with your values. Live on Zoom on May 6th and 7th from 10 a.m. to 11:30 a.m. PST / 1 p.m. to 2:30 p.m. EST. Sign up here.

Business Witch The Course: This episode is brought to you by Business Witch The Course.

Additional Resources:
  • Learn about working with me and subscribe for business tips.
  • Apply to be a 1:1 client.
  • Follow me on Instagram!

The Sports Docs Podcast
126: AAOS Annual Meeting Updates: Sleep & Orthopaedic Surgeons

The Sports Docs Podcast

Play Episode Listen Later Apr 21, 2025 10:19


Our next poster is titled Sleep in Orthopaedic Surgeons: A Prospective Longitudinal Study of the Effect of Home Call on Orthopedic Attending and Resident Sleep. Recurrent episodes of partial sleep deprivation resulting from call schedules are commonly seen in physicians. This has been shown to decrease mental effectiveness at work to a level comparable to a blood alcohol level of 0.08%. Sleep deprivation has been associated with adverse personal health events, including an increased risk of diabetes, heart disease, stroke and death. Additionally, sleep deprivation has been demonstrated to have a negative clinical impact, including decreased surgical performance, increased errors, and greater risk of accidents. Despite the known negative impacts of poor sleep, the effect of home orthopedic call on surgeon sleep has not been well quantified. The purpose of the study was to quantify the impact of resident and attending physician home call on sleep performance – specifically total sleep, slow-wave sleep and rapid eye movement (REM) sleep – as well as heart rate variability. Sixteen orthopedic residents and 14 attendings at a level 1 academic trauma hospital wore WHOOP 3.0 straps for a period of 1 year. The WHOOP strap is a wearable device that tracks all 4 stages of sleep and monitors wake events, efficiency and respiratory rate. The authors recorded total sleep, slow-wave sleep and REM sleep. Slow-wave sleep is considered the most restorative sleep stage and plays an important role in growth, memory and immune function. This study showed that, overall, attendings slept significantly less than residents, at 6 hours compared to 6.7 hours. When on home call, resident total sleep decreased by 20%, REM sleep decreased by 12%, and slow-wave sleep decreased by 12%. For attendings, total sleep on call decreased by 10%, REM sleep decreased by 7% and slow-wave sleep decreased by 4%. The authors concluded that orthopedic surgery residents and attendings exhibit low baseline sleep, and taking home call reduces this even further. On home-call nights, residents and attendings experienced a significant decrease in total sleep, REM sleep and slow-wave sleep. The authors suggested that further research is required to determine how to ensure excellent patient care, maximize educational environments and develop strategies for resilience.

The Orthobullets Podcast
Coinflips⎪Shoulder & Elbow⎪Recurrent Shoulder Dislocation s/p Latarjet in 20M

The Orthobullets Podcast

Play Episode Listen Later Apr 17, 2025 65:15


Welcome to Season 2 of the Orthobullets Podcast. Today's show is Coinflips, where expert speakers discuss grey zone decisions in orthopedic surgery. This episode will feature doctors John Tokish, Derek Papp, Joseph Abboud, & Desmond J. Bokor. They will discuss the case titled "Recurrent Shoulder Dislocation s/p Latarjet in 20M." Follow Orthobullets on Social Media: Facebook, Instagram, Twitter, LinkedIn.

Dr. C In The D
What's New in Male Fertility: Sperm QT, DFI, and Zymot Explained

Dr. C In The D

Play Episode Listen Later Apr 17, 2025 15:09


In this episode of Dr. C in the D, Dr. Carol Kowalczyk and Dr. Nicole Budrys dive into some of the most exciting advancements in the world of male fertility—shedding light on new testing options and lab techniques that are helping couples find answers and improve outcomes. With so many people turning to Facebook groups, online forums, and podcasts for fertility guidance, this conversation is all about clearing the air and explaining what's truly worth your time (and money) when navigating fertility treatments. From gene expression in sperm to cutting-edge lab tools, they unpack how these innovations support smarter, more personalized fertility care.

What You'll Learn in This Episode:

  • Sperm QT Test: A groundbreaking test that analyzes sperm gene expression to determine how well sperm can fertilize an egg. Ideal for: unexplained infertility, long-standing TTC journeys, and deciding between IUI vs. IVF. Key takeaway: a normal semen analysis doesn't always tell the whole story—Sperm QT can fast-track you to the treatment that's right for you.
  • DFI (DNA Fragmentation Index): A test that measures DNA damage in sperm, which can impact embryo development and miscarriage risk. Often used in: recurrent pregnancy loss evaluations. Treatment options: lifestyle changes, antioxidants, frequent ejaculation, or surgical evaluation for varicoceles.
  • ZyMot for IVF: A lab technology that sorts and activates the healthiest sperm by mimicking the body's natural selection process. Why it matters: ZyMot has been linked to more and better-quality embryos, increasing chances of pregnancy while reducing the need for multiple IVF cycles.

Why This Matters: Historically, fertility testing has focused heavily on women, but male factors can play just as crucial a role. These new tools give couples better answers, save time and money, and help create stronger embryos for today—and for future family planning. Whether you're at the beginning of your journey or have been trying for years, these tests offer clarity and direction for the path ahead.

Mentioned in This Episode: Michigan Center for Fertility & Women's Health, mifertility.com — for appointments, information, and resources; comprehensive fertility testing and IVF support; an embryology lab implementing cutting-edge tools like ZyMot.

Next Episode Teaser: Stay tuned as Dr. Carol Kowalczyk and Dr. Nicole Budrys return to explore advancements in female fertility, from new treatment options to ways to enhance implantation success.

Primary Care Knowledge Boost
Recurrent Acute Otitis Media in Children

Primary Care Knowledge Boost

Play Episode Listen Later Apr 9, 2025 28:26


Episode one of four on Paediatric ENT conditions. Doctors Lisa and Sara are joined by Paediatric Ear Nose and Throat Consultant Dr Simone Schaefer for this episode on Recurrent Acute Otitis Media (AOM) in Children. A common problem: we take a classic presentation and work through getting the diagnosis right, red flags and differentials, before discussing management and which children may need referral. We then discuss the limited options of what might be done in an ENT clinic and helpful resources for families. You can use these podcasts as part of your CPD - we don't do certificates but they still count :)

Useful Resources:
NICE Clinical Knowledge Summaries on Acute Otitis Media (including initial presentation, persistent infections and recurrent infections; updated August 2024): https://cks.nice.org.uk/topics/otitis-media-acute/
Hoberman et al. 2021 NEJM, Tympanostomy tube placement or medical management for recurrent acute otitis media: https://www.nejm.org/doi/full/10.1056/NEJMoa2027278
Resources for patients: https://www.nhs.uk/conditions/ear-infections/ and https://www.hopkinsmedicine.org/health/conditions-and-diseases/ear-infections-in-babies-and-toddlers
ENT UK: Decision-making aid for parents re grommets: https://www.entuk.org/patients/conditions/5/grommets_a_decisionmaking_aid_for_parents
ENT UK: Explainer leaflets, How to use ear drops or sprays: https://www.entuk.org/patients/conditions/74/how_to_use_ear_drops_or_sprays
The Royal Children's Hospital Melbourne Clinical Paediatric Guideline (good algorithm; pictures of erythematous tympanic membranes versus acute otitis media with bulging/effusion): https://www.rch.org.au/clinicalguide/guideline_index/acute_otitis_media/
ENT Guidelines for Derbyshire (includes details of topical drops in specific cases): https://www.derbyshiremedicinesmanagement.nhs.uk/assets/Clinical_Guidelines/Formulary_by_BNF_chapter_prescribing_guidelines/BNF_chapter_12/Chapter_12_Ear_nose_and_oropharynx.pdf
___
We really want to make these episodes relevant and helpful: if you have any questions or want any particular areas covered then contact us on Twitter @PCKBpodcast, or leave a comment on our quick anonymous survey here: https://pckb.org/feedback
Email us at: primarycarepodcasts@gmail.com
___
This podcast has been made with the support of GP Excellence and Greater Manchester Integrated Care Board.

PeerVoice Oncology & Haematology Video
Alon Altman, MD, FRCSC, CCPE - It Takes Two: Integrating Immunotherapy in Combination With Chemotherapy Into Recurrent or Primary Advanced Endometrial Cancer Care

PeerVoice Oncology & Haematology Video

Play Episode Listen Later Apr 2, 2025 9:28


Alon Altman, MD, FRCSC, CCPE - It Takes Two: Integrating Immunotherapy in Combination With Chemotherapy Into Recurrent or Primary Advanced Endometrial Cancer Care

The Sports MAP Podcast
Clinical Cases - Recurrent Soleus Strain

The Sports MAP Podcast

Play Episode Listen Later Apr 1, 2025


In this new installment of the How I Rehab podcast, 'Clinical Cases', we chat with the senior men's Rehabilitation Physiotherapist for the Sydney Swans Football Club, […] The post Clinical Cases - Recurrent Soleus Strain first appeared on The Sports MAP Network.

BJUI - BJU International
BJUI/BURST: Efficacy of direct visual internal urethrotomy versus balloon dilation to treat recurrent urethral stricture following failed urethroplasty

BJUI - BJU International

Play Episode Listen Later Mar 24, 2025 4:54


Part of the BJUI/BURST podcast series. In this BJUI/BURST podcast, Saad Masood, an SHO in the urology department at York Hospital, discusses the BJUI Compass paper "Efficacy of direct visual internal urethrotomy versus balloon dilation to treat recurrent urethral stricture following failed urethroplasty". BJUI Compass is the fully open access sister title to BJU International. You can read the paper discussed in this podcast here: https://bjui-journals.onlinelibrary.wiley.com/doi/10.1002/bco2.458

The Sports MAP Podcast
#42 How I Rehab: Recurrent & T-Junction Hamstring Injuries

The Sports MAP Podcast

Play Episode Listen Later Mar 21, 2025


In this episode of the How I Rehab podcast by Sports MAP we chat with Fearghal Kerin, hamstring injury consultant at Kerin Performance. Fearghal holds […] The post #42 How I Rehab: Recurrent & T-Junction Hamstring Injuries first appeared on The Sports MAP Network.

Batteries Included
36: How Recurrent Monitors Your EV Battery's Health And Makes EV Buying Better

Batteries Included

Play Episode Listen Later Mar 19, 2025 35:08


We talk with Recurrent founder and CEO Scott about how the company helps owners manage electric vehicle battery health, as well as gives you insight into the battery health of an EV you're considering buying. And more!

NetWorth Radio
NetWorth Radio's Texas Global Business Leadership Series: Spencer McGowan Interviews Brad Olsen and Oliver Doolin from Recurrent Advisors in Houston! Is Texas Poised to Become the Energy Capital of the World?

NetWorth Radio

Play Episode Listen Later Mar 17, 2025 12:21


Oh, My Health...There Is Hope!
Empowering Women's Health: Tackling Recurrent UTIs with Melissa Kramer

Oh, My Health...There Is Hope!

Play Episode Listen Later Mar 15, 2025 23:07


"Quality of life means something different for everybody. It's about how we can get patients to that place where they feel like they have a life again." - Melissa Kramer Melissa Kramer is a pioneering entrepreneur and leading patient advocate dedicated to enhancing women's health research, particularly in the domain of pelvic health. Melissa is the founder of Live UTI Free, an influential online community and education platform that connects patients, clinicians, and researchers with a focus on integrating the patient perspective into clinical studies. Her mission centers around addressing the gender gap in health research and improving the understanding and treatment of urinary tract infections (UTIs) in women. With over eight years of experience, Melissa continues to impact the field through engagement with scientific communities worldwide and pioneering patient-focused research initiatives. Episode Summary: Jana Short hosts another insightful episode of "Oh My Health, There Is Hope," featuring Melissa Kramer, an advocate spearheading the transformation of women's health research. Melissa shares her personal health journey that inspired her to found Live UTI Free, a platform dedicated to connecting UTI sufferers with the research and clinical support they need for better health outcomes. Her experience with recurrent UTIs and inadequate medical advice opened a path to empowering others through collective patient advocacy and improved healthcare research. In this insightful discussion, the conversation focuses on the alarming inaccuracies in current UTI testing methods, which Melissa reveals can miss up to 50% of infections. The episode delves into groundbreaking research about the urinary microbiome and the critical need for reform in diagnosing and treating UTIs. Melissa emphasizes the necessity for patients, particularly women, to take charge of their health by demanding thorough investigations and advocating for better testing. 
As the dialogue unfolds, Melissa shares the dynamic resources available through Live UTI Free, designed to enhance patient knowledge and advocacy while fostering a supportive community driven by scientific advancement and personal empowerment.
Key Takeaways:
- Standard UTI tests can be inaccurate, missing up to 50% of infections, and Melissa's work aims to advance better testing methods.
- The urinary microbiome, discovered in 2014, holds the key to understanding UTIs, offering new avenues for diagnosis and treatment.
- Hormonal factors, particularly around menopause, significantly impact the recurrence of UTIs, an area receiving increased attention in Melissa's research.
- Live UTI Free provides essential resources, including a patient checklist and educational materials for clinicians, to better advocate for patient health.
- Melissa emphasizes the vital importance of creating a community where UTI sufferers feel less isolated and more empowered in their health journeys.
Get in touch for free resources and information about clinicians who specialize in recurrent and chronic UTI: https://liveutifree.com/contact
Resources:
https://liveutifree.com
https://www.youtube.com/@liveutifree
https://www.instagram.com/liveutifree/
https://www.facebook.com/liveUTIfree
https://www.linkedin.com/company/live-uti-free/
Get in touch with Jana and listen to more podcasts: https://www.janashort.com/
Show music: 'Hold On' by Amy Gerhartz: https://www.amygerhartz.com/music
Get the Best Holistic Life Magazine subscription! One of the fastest-growing independent magazines centered around holistic living: https://bestholisticlife.info/Subscription
Grab your gift today: https://www.janashort.com/becoming-the-next-influencers-download-offer/
Connect with Jana Short: https://www.janashort.com/contact/

The Sports MAP Podcast
The Recurrent Hamstring - Clinical Chats

The Sports MAP Podcast

Play Episode Listen Later Mar 14, 2025


This new installment of the How I Rehab Podcast, 'Clinical Chats', sees the Sports MAP team discussing some of the latest content they have accessed […] The post The Recurrent Hamstring - Clinical Chats first appeared on The Sports MAP Network.

Sky Women
Episode 197: Male partner treatment to prevent recurrent BV

Sky Women

Play Episode Listen Later Mar 9, 2025 11:46


In this episode, Dr. Carolyn Moyers dives into groundbreaking research published in The New England Journal of Medicine (March 6, 2025), shedding new light on bacterial vaginosis (BV) management. A randomized controlled trial from Australia, involving 164 couples, found that treating male partners significantly reduced BV recurrence rates by half over 12 weeks. This study challenges conventional treatment approaches and suggests BV may function more like a sexually transmitted infection (STI) than previously thought. Dr. Moyers discusses the implications for clinical practice, the ongoing challenges in managing persistent and recurrent BV, and the broader health risks associated with bacterial vaginosis.

JAMA Network
JAMA Neurology : Location and Timing of Recurrent, Nontraumatic Intracerebral Hemorrhage

JAMA Network

Play Episode Listen Later Mar 3, 2025 15:57


Interview with David J. Seiffge, MD, author of Location and Timing of Recurrent, Nontraumatic Intracerebral Hemorrhage. Hosted by Cynthia E. Armand, MD. Related Content: Location and Timing of Recurrent, Nontraumatic Intracerebral Hemorrhage

JAMA Neurology Author Interviews: Covering research, science, & clinical practice in the structure and function of the nervous system

Interview with David J. Seiffge, MD, author of Location and Timing of Recurrent, Nontraumatic Intracerebral Hemorrhage. Hosted by Cynthia E. Armand, MD. Related Content: Location and Timing of Recurrent, Nontraumatic Intracerebral Hemorrhage

The Orthobullets Podcast
Coinflips⎪Recon⎪Painful TKA with Recurrent Tibial Loosening in 62F

The Orthobullets Podcast

Play Episode Listen Later Feb 18, 2025 69:09


Welcome to Season 2 of the Orthobullets Podcast. Today's show is Coinflips, where expert speakers discuss grey zone decisions in orthopedic surgery. This episode will feature doctors Bryan Springer, Michael Taunton, William Long, & Anna Cohen-Rosenblum. They will discuss the case titled "Painful TKA with Recurrent Tibial Loosening in 62F."
Follow Orthobullets on Social Media: Facebook, Instagram, Twitter, LinkedIn

The Healthy Celiac Podcast
Exploring the Surprising Link Between Celiac Disease and Recurrent UTIs

The Healthy Celiac Podcast

Play Episode Listen Later Feb 17, 2025 9:20 Transcription Available


Send a one-way text message. Ask a question or message me your feedback. Be sure to leave your name too if you'd like a shoutout on the podcast. If you've ever suffered from a urinary tract infection (UTI), you know how painful and frustrating it can be. But did you know that UTIs can sometimes be linked to celiac disease and accidental gluten exposure? In this episode, I dive into why this happens, how gluten triggers inflammation that may lead to UTIs, and what you can do to prevent them. I also share natural remedies that can help ease symptoms and support healing, alongside medical treatment. If you're experiencing frequent UTIs and can't figure out why, this episode could be the missing piece of the puzzle. Tune in to learn how to protect your bladder health while staying committed to your gluten free lifestyle!
Find out how Ultimate Celiac System can support your Celiac journey here: https://belindawhelantraining.com/ultimate-celiac-system
Wish you could get gluten free meals on the table fast that the whole family will love? Check out Meal Plans Made Easy: https://belindawhelantraining.com/gluten-free-meal-plans-made-easy
Join my free community and grab your copy of 11 Mistakes People Make Living Gluten Free here: https://www.belindawhelan.myflodesk.com/11mistakes
Check out my Daily Health Tracker here: https://www.belindawhelan.com/dailyhealthtracker
And I would love to connect with you on Instagram: thehealthyceliac
If you have a spare moment, please pop over to Apple Podcasts and leave me a review. Thank you! Music credit: bensound.com

Between Two Lips
Help For Recurrent UTI's with Dr Carley Akehurst

Between Two Lips

Play Episode Listen Later Feb 12, 2025 55:19


Dr Carley Akehurst is a dedicated naturopathic doctor and clinic founder with a passion for integrating traditional and modern medical practices to provide holistic care. Her approach to patient care emphasizes education and empowerment, allowing individuals to achieve balance in their lives through personalized treatment plans. With 13 years of clinical practice experience, Dr. Akehurst now focuses almost entirely on helping patients end the cycle of chronic and recurrent UTIs. In 2022, she opened the clinic she always wanted to work in. Though she opened her doors on her own and was the only patient care provider at the start, she now runs a dynamic and diverse team of twelve exceptional practitioners. Outside of the practice, Dr. Akehurst is a mother of three young children and often found in the forest, at the beach, or on the ski hill.
https://doctorakehurst.com/
https://www.facebook.com/doctorakehurst/
https://www.instagram.com/drcarleyakehurst/?hl=en
____________________________________________________________________________________
Moisturize Your Vagina: https://www.feel-amazing.com/?ref=vaginacoach
Join The Buff Method and get the 28 day challenge for free: https://go.buffmuff.com/method?utm_source=cf-redirect&utm_medium=organic&utm_campaign=organic
Thank you so much for listening! I use fitness and movement to help women prevent and overcome pelvic floor challenges like incontinence and organ prolapse. There is help for women in all life stages! Every Woman Needs A Vagina Coach! Please make sure to LEAVE A REVIEW and SUBSCRIBE to the show for the best fitness and wellness advice south of your belly button.
I recommend checking out my comprehensive pelvic health education and fitness programs on my Buff Muff App. You can also join my next 28 Day Buff Muff Challenge: https://www.vaginacoach.com/buffmuff
If you are feeling social you can connect with me…
On Facebook: https://www.facebook.com/VagCoach
On Instagram: https://www.instagram.com/vaginacoach/
On Twitter: https://twitter.com/VaginaCoach
On the web: www.vaginacoach.com
Get your Feel Amazing Vaginal Moisturizer here.

Grad Chat - Queen's School of Graduate Studies
Christina Ferazzutti (Biomedical & Molecular Sciences) – Why One Complicated Pregnancy Can Lead to Another: The Role of Immune Memory

Grad Chat - Queen's School of Graduate Studies

Play Episode Listen Later Feb 5, 2025 35:39


Recurrent pregnancy loss (RPL) is a significant complication linked to uncontrolled inflammation, which not only causes immediate distress but also heightens risks in future pregnancies. It is hypothesized that inflammation during pregnancy induces long-term changes in maternal immune cells, altering their responses in subsequent pregnancies and increasing complications. For upcoming interviews, check out the Grad Chat webpage on the Queen's University School of Graduate Studies & Postdoctoral Affairs website.

Auto Remarketing Podcast
The great EV debate & plotting a path to 2030

Auto Remarketing Podcast

Play Episode Listen Later Jan 21, 2025 32:58


We continue our episodes of the Auto Remarketing Podcast originating from Used Car Week 2024 in Scottsdale, Ariz., with a lively debate about the future of electric vehicles. Moderated by Cherokee Media Group's Joe Overby, the opposing points of view on EVs came from Steve Greenfield of Automotive Ventures and Scott Case of Recurrent.

Your Journey to Fertility
62: Kari's Journey Through Recurrent Loss + Secondary Infertility

Your Journey to Fertility

Play Episode Listen Later Jan 9, 2025 43:19


On today's episode, we explore the journey of one amazing woman's path to building her family. Through loss, recurrent loss, secondary infertility & so much heartache, this is a story of hope for anyone who needs to hear it. I absolutely loved hearing others' fertility stories when I was struggling to conceive (and I especially loved hearing the ones that had happy endings... spoiler alert!). I hope this provides you with a little inspiration as we start a new year. By the time you finish listening, you'll find out:
- How Kari navigated her own path through recurrent loss & infertility
- What she did differently in the weeks leading up to her positive pregnancy test
- How she was able to re-connect with her body & trust in her ability to have her family
January is a VERY exciting time at Element Pilates & Yoga. At the end of this month I will be hosting a FREE 3-Day Live Fertility Yoga series event where you can come & meet me, enjoy some of the beautiful practices of Fertility Yoga, Meditation & Breathwork, and learn how these tools can support YOU in 2025. At the close of this series, you'll have the opportunity to join me inside the In Your Element Fertility Yoga Program, which is the most comprehensive mind-body fertility program available. Here you'll learn exactly how to use these practices to regulate your nervous system, synchronise your hormones & support your fertility throughout each phase of your cycle. Doors will be open for a very limited time to join, so jump on the waitlist today!
When you finish listening, I'd love to hear your biggest takeaway from today's episode. Take a screenshot of you listening on your device, share it to your Instagram stories and tag me: @jen.elementpilatesyoga
Join me this January 28th for a FREE 3-Day Live Fertility Yoga Series to guide you through my exact process to:
- Regulate your nervous system
- Support your fertility
- Sync with your cycle & synchronise your hormones
This is the exact process I have taken 100s of women through along their journeys to motherhood and beyond. And I can't wait to show you how. Visit www.elementpilatesyoga.com/waitlist for all the details & to register!

Origin Stories
Top Human Origins Discoveries of 2024

Origin Stories

Play Episode Listen Later Dec 24, 2024 37:27


2024 was another amazing year in human origins research. In this episode, three Leakey Foundation grantees (and one podcast host) share their picks for the most exciting discoveries of the year. Support this show and the science we talk about. Your tax-deductible gift to The Leakey Foundation will be quadruple-matched through midnight on December 31! Want more science between podcast episodes? Join our monthly newsletter for human origins news and updates from Origin Stories and The Leakey Foundation.
Links to learn more (all research articles are open-access and free to read):
On the genetic basis of tail-loss evolution in humans and apes
Why don't humans have tails? Scientists find answers in an unlikely place
Long genetic and social isolation in Neanderthals before their extinction
Meet Thorin: A cave-dwelling population of Neanderthals isolated for 50,000 years
Recurrent evolution and selection shape structural diversity at the amylase locus
How early humans evolved to eat starch
Footprint evidence for locomotor diversity and shared habitats among early Pleistocene hominins
Fossilized footprints reveal two extinct hominin species living side by side 1.5 million years ago

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
2024 in Post-Transformers Architectures (State Space Models, RWKV) [LS Live @ NeurIPS]

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Dec 24, 2024 43:02


Happy holidays! We'll be sharing snippets from Latent Space LIVE! through the break, bringing you the best of 2024! We want to express our deepest appreciation to event sponsors AWS, Daylight Computer, Thoth.ai, StrongCompute, Notable Capital, and most of all, all our LS supporters who helped fund the gorgeous venue and A/V production! For NeurIPS last year we did our standard conference podcast coverage interviewing selected papers (that we have now also done for ICLR and ICML), however we felt that we could be doing more to help AI Engineers 1) get more industry-relevant content, and 2) recap 2024 year in review from experts. As a result, we organized the first Latent Space LIVE!, our first in person miniconference, at NeurIPS 2024 in Vancouver. Of perennial interest, particularly at academic conferences, is scaled-up architecture research as people hunt for the next Attention Is All You Need. We have many names for them: "efficient models", "retentive networks", "subquadratic attention" or "linear attention", but some of them don't even have any lineage with attention - one of the best papers of this NeurIPS was Sepp Hochreiter's xLSTM, which has a particularly poetic significance as one of the creators of the LSTM returning to update and challenge the OG language model architecture. So, for lack of a better term, we decided to call this segment "the State of Post-Transformers" and fortunately everyone rolled with it. We are fortunate to have two powerful friends of the pod to give us an update here: * Together AI: with CEO Vipul Ved Prakash and CTO Ce Zhang joining us to talk about how they are building Together together as a quote unquote full stack AI startup, from the lowest level kernel and systems programming to the highest level mathematical abstractions driving new model architectures and inference algorithms, with notable industry contributions from RedPajama v2, Flash Attention 3, Mamba 2, Mixture of Agents, BASED, Sequoia, Evo, Dragonfly, Dan Fu's ThunderKittens and many more research projects this year. * Recursal AI: with CEO Eugene Cheah, who has helped lead the independent RWKV project while also running Featherless AI. This year, the team has shipped RWKV v5, codenamed Eagle, to 1.5 billion Windows 10 and Windows 11 machines worldwide, to support Microsoft's on-device, energy-usage-sensitive Windows Copilot usecases, and has launched the first updates on RWKV v6, codenamed Finch and GoldFinch. On the morning of Latent Space Live, they also announced QRWKV6, a Qwen 32B model modified with RWKV linear attention layers. We were looking to host a debate between our speakers, but given that both of them were working on post-transformers alternatives, it became a joint presentation instead. Full Talk on Youtube. Please like and subscribe! Links: All the models and papers they picked: * Earlier Cited Work * Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention * Hungry hungry hippos: Towards language modeling with state space models * Hyena hierarchy: Towards larger convolutional language models * Mamba: Linear-Time Sequence Modeling with Selective State Spaces * S4: Efficiently Modeling Long Sequences with Structured State Spaces * Just Read Twice (Arora et al) * Recurrent large language models that compete with Transformers in language modeling perplexity are emerging at a rapid rate (e.g., Mamba, RWKV). Excitingly, these architectures use a constant amount of memory during inference. However, due to the limited memory, recurrent LMs cannot recall and use all the information in long contexts leading to brittle in-context learning (ICL) quality. A key challenge for efficient LMs is selecting what information to store versus discard. In this work, we observe the order in which information is shown to the LM impacts the selection difficulty. 
* To formalize this, we show that the hardness of information recall reduces to the hardness of a problem called set disjointness (SD), a quintessential problem in communication complexity that requires a streaming algorithm (e.g., recurrent model) to decide whether inputted sets are disjoint. We empirically and theoretically show that the recurrent memory required to solve SD changes with set order, i.e., whether the smaller set appears first in-context. * Our analysis suggests, to mitigate the reliance on data order, we can put information in the right order in-context or process prompts non-causally. Towards that end, we propose: (1) JRT-Prompt, where context gets repeated multiple times in the prompt, effectively showing the model all data orders. This gives 11.0±1.3 points of improvement, averaged across 16 recurrent LMs and the 6 ICL tasks, with 11.9× higher throughput than FlashAttention-2 for generation prefill (length 32k, batch size 16, NVidia H100). We then propose (2) JRT-RNN, which uses non-causal prefix-linear-attention to process prompts and provides 99% of Transformer quality at 360M params., 30B tokens and 96% at 1.3B params., 50B tokens on average across the tasks, with 19.2× higher throughput for prefill than FA2.* Jamba: A 52B Hybrid Transformer-Mamba Language Model* We present Jamba, a new base large language model based on a novel hybrid Transformer-Mamba mixture-of-experts (MoE) architecture. * Specifically, Jamba interleaves blocks of Transformer and Mamba layers, enjoying the benefits of both model families. MoE is added in some of these layers to increase model capacity while keeping active parameter usage manageable. * This flexible architecture allows resource- and objective-specific configurations. 
In the particular configuration we have implemented, we end up with a powerful model that fits in a single 80GB GPU.* Built at large scale, Jamba provides high throughput and small memory footprint compared to vanilla Transformers, and at the same time state-of-the-art performance on standard language model benchmarks and long-context evaluations. Remarkably, the model presents strong results for up to 256K tokens context length. * We study various architectural decisions, such as how to combine Transformer and Mamba layers, and how to mix experts, and show that some of them are crucial in large scale modeling. We also describe several interesting properties of these architectures which the training and evaluation of Jamba have revealed, and plan to release checkpoints from various ablation runs, to encourage further exploration of this novel architecture. We make the weights of our implementation of Jamba publicly available under a permissive license.* SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformers* We introduce Sana, a text-to-image framework that can efficiently generate images up to 4096×4096 resolution. Sana can synthesize high-resolution, high-quality images with strong text-image alignment at a remarkably fast speed, deployable on laptop GPU. Core designs include: * (1) Deep compression autoencoder: unlike traditional AEs, which compress images only 8×, we trained an AE that can compress images 32×, effectively reducing the number of latent tokens. * (2) Linear DiT: we replace all vanilla attention in DiT with linear attention, which is more efficient at high resolutions without sacrificing quality. * (3) Decoder-only text encoder: we replaced T5 with modern decoder-only small LLM as the text encoder and designed complex human instruction with in-context learning to enhance the image-text alignment. 
* (4) Efficient training and sampling: we propose Flow-DPM-Solver to reduce sampling steps, with efficient caption labeling and selection to accelerate convergence. * As a result, Sana-0.6B is very competitive with modern giant diffusion model (e.g. Flux-12B), being 20 times smaller and 100+ times faster in measured throughput. Moreover, Sana-0.6B can be deployed on a 16GB laptop GPU, taking less than 1 second to generate a 1024×1024 resolution image. Sana enables content creation at low cost. * RWKV: Reinventing RNNs for the Transformer Era* Transformers have revolutionized almost all natural language processing (NLP) tasks but suffer from memory and computational complexity that scales quadratically with sequence length. In contrast, recurrent neural networks (RNNs) exhibit linear scaling in memory and computational requirements but struggle to match the same performance as Transformers due to limitations in parallelization and scalability. * We propose a novel model architecture, Receptance Weighted Key Value (RWKV), that combines the efficient parallelizable training of transformers with the efficient inference of RNNs.* Our approach leverages a linear attention mechanism and allows us to formulate the model as either a Transformer or an RNN, thus parallelizing computations during training and maintains constant computational and memory complexity during inference. * We scale our models as large as 14 billion parameters, by far the largest dense RNN ever trained, and find RWKV performs on par with similarly sized Transformers, suggesting future work can leverage this architecture to create more efficient models. 
This work presents a significant step towards reconciling trade-offs between computational efficiency and model performance in sequence processing tasks.* LoLCATs: On Low-Rank Linearizing of Large Language Models* Recent works show we can linearize large language models (LLMs) -- swapping the quadratic attentions of popular Transformer-based LLMs with subquadratic analogs, such as linear attention -- avoiding the expensive pretraining costs. However, linearizing LLMs often significantly degrades model quality, still requires training over billions of tokens, and remains limited to smaller 1.3B to 7B LLMs. * We thus propose Low-rank Linear Conversion via Attention Transfer (LoLCATs), a simple two-step method that improves LLM linearizing quality with orders of magnitudes less memory and compute. * We base these steps on two findings. * First, we can replace an LLM's softmax attentions with closely-approximating linear attentions, simply by training the linear attentions to match their softmax counterparts with an output MSE loss ("attention transfer").* Then, this enables adjusting for approximation errors and recovering LLM quality simply with low-rank adaptation (LoRA). * LoLCATs significantly improves linearizing quality, training efficiency, and scalability. We significantly reduce the linearizing quality gap and produce state-of-the-art subquadratic LLMs from Llama 3 8B and Mistral 7B v0.1, leading to 20+ points of improvement on 5-shot MMLU. * Furthermore, LoLCATs does so with only 0.2% of past methods' model parameters and 0.4% of their training tokens. * Finally, we apply LoLCATs to create the first linearized 70B and 405B LLMs (50x larger than prior work). * When compared with prior approaches under the same compute budgets, LoLCATs significantly improves linearizing quality, closing the gap between linearized and original Llama 3.1 70B and 405B LLMs by 77.8% and 78.1% on 5-shot MMLU.Timestamps* [00:02:27] Intros* [00:03:16] Why Scale Context Lengths? 
or work on Efficient Models* [00:06:07] The Story of SSMs* [00:09:33] Idea 1: Approximation -> Principled Modeling* [00:12:14] Idea 3: Selection* [00:15:07] Just Read Twice* [00:16:51] Idea 4: Test Time Compute* [00:17:32] Idea 2: Hardware & Kernel Support* [00:19:49] RWKV vs SSMs* [00:24:24] RWKV Arch* [00:26:15] QRWKV6 launch* [00:30:00] What's next* [00:33:21] Hot Takes - does anyone really need long context? Transcript: [00:00:00] AI Charlie: We're back at Latent Space Live, our first mini conference held at NeurIPS 2024 in Vancouver. This is Charlie, your AI co host. As a special treat this week, we're recapping the best of 2024 going domain by domain. We sent out a survey to the over 900 of you who told us what you wanted, and then invited the best speakers in the Latent Space Network to cover each field.[00:00:24] AI Charlie: 200 of you joined us in person throughout the day, with over 2200 watching live online. Thanks! Our next keynote covers the State of Transformers alternative architectures, with a special joint presentation with Dan Fu of Together AI and Eugene Cheah of Recursal AI and Featherless AI. We've featured both Together and Recursal on the pod before, with CEO Vipul Ved Prakash introducing them.[00:00:49] AI Charlie: And CTO Ce Zhang joining us to talk about how they are building Together together as a quote unquote full stack AI startup from the lowest level kernel and systems [00:01:00] programming to the highest level mathematical abstractions driving new model architectures and inference algorithms with notable industry contributions from RedPajama v2, Flash Attention 3, Mamba 2, Mixture of Agents.[00:01:15] AI Charlie: BASED, Sequoia, Evo, Dragonfly, Dan Fu's ThunderKittens, and many more research projects this year. As for Recursal and Featherless, we were the first podcast to feature RWKV last year, and this year the team has shipped RWKV v5, codenamed Eagle, to 1.5 billion Windows 10 and Windows 11 machines worldwide to support Microsoft's on-device, energy-usage-sensitive Windows Copilot use cases, and has launched the first updates on RWKV v6, codenamed Finch and GoldFinch.[00:01:53] AI Charlie: On the morning of Latent Space Live, they also announced QRWKV6, a Qwen 32B model [00:02:00] modified with RWKV linear attention layers. Eugene has also written the single most popular guest post on the Latent Space blog this year. Yes, we do take guest posts on what he has discovered about the H100 GPU inference NeoCloud market since the successful launch of Featherless AI this year.[00:02:20] AI Charlie: As always, don't forget to check the show notes for the YouTube link to their talk as well as their slides. Watch out and take care.[00:02:27] Intros[00:02:27] Dan Fu: Yeah, so thanks so much for having us. So this is going to be a little bit of a two part presentation. My name is Dan. I'm at Together AI, and I'll be joining UCSD as faculty in about a year. And Eugene, you want to introduce yourself?[00:02:46] Eugene Cheah: Eugene, I lead the art activity team, and I, I'm CEO of Featherless, and we both work on this new post transformer architecture space.[00:02:55] Dan Fu: Yeah, so yeah, so today we're really excited to talk to you a little bit [00:03:00] about that. So first I'm going to give a broad overview of kind of the last few years of progress in non post transformer architectures. And then afterwards Eugene will tell us a little bit about the latest and the greatest and the latest frontier models in this space.[00:03:16] Why Scale Context Lengths? or work on Efficient Models[00:03:16] Dan Fu: So, the story starts with Scaling. So this is probably a figure or something like this that you've seen very recently. 
Over the last five to six years, we've seen models really scale up in parameter size, and that's brought with it a bunch of new capabilities, like the ability to talk to you and tell you sometimes how to use your Colab screens.[00:03:35] Dan Fu: But another place where we've seen scaling especially recently is scaling in context length. So this can mean having more text inputs for your models, but it can also mean things like taking a lot of visual token inputs, image inputs, to your models, or generating lots of outputs. And one thing that's been really exciting over the last few months or so is that we're seeing scaling not only during training time, but also [00:04:00] during test time.[00:04:00] Dan Fu: So this is the iconic image from the OpenAI o1 release. Not only are we starting to scale train time compute, but we're also starting to scale test time compute. Now if you're familiar with our attention and our transformer architectures today, this graph on the right might look a little bit scary.[00:04:19] Dan Fu: And one of the reasons is that the implications are a little bit interesting. So what does it mean if we want to continue having smarter and smarter models? Do we just need to start building bigger and bigger data centers, spending more flops? Is this little DALL-E 3 "we need more flops" guy going to be the future of all of AI?[00:04:39] Dan Fu: Or is there a better way, another path forward? Maybe we can get the same capabilities that we've gotten used to, but for a lot less compute, a lot less flops. And one of the things that we're going to talk about today is specifically looking at that core attention operator in some of these models.[00:04:57] Dan Fu: And the reason is that, so these are just some [00:05:00] basic, you know, scaling curves, but attention has compute that scales quadratically in the context length.
So that means that if you're doing something like test time compute and you want to spend a bunch of tokens thinking about what comes next, the longer that that goes the, the, the more tokens you spend on that, that compute grows quadratically in that.[00:05:19] Dan Fu: One of the questions that we're interested in is, can we take that basic sequence model, that basic sequence primitive at the bottom, and get it to scale better? Can we scale in, let's say, n to the 3 halves or n log n? So in, in the first part of the talk, so we just went over the introduction. What I'm gonna do over the next few slides is just talk about some of the key advances and ideas that have shown over the past few years since maybe early 2020 to, to now that shown promise that this might actually be possible.[00:05:48] Dan Fu: That you can actually get potentially the same quality that we want while scale, while scaling better. So to do that, we're and, and basically the, the story that we're gonna look is we're gonna start to see [00:06:00] how. So this is a basic graph of just the past couple years of progress of perplexity where that blue line, that dotted blue line, is attention.[00:06:07] The Story of SSMs[00:06:07] Dan Fu: It's your basic transformer, full dense attention. And then the dots coming down are some of the methods that you'll see in this presentation today. We're going to turn the clock back all the way to 2020. So this, this, this question of can we make attention subquadratic? Basically, as soon as we said attention is all you need, People started asking this question.[00:06:28] Dan Fu: So we have this quadratic attention operator. Can we do better? I'll briefly talk about why attention is quadratic. And the basic thing that happens, if you're not familiar, is that you have these inputs, these keys and queries. 
And what you do in this attention matrix, this S matrix over here, is that you're comparing every token in your input to every other token.[00:06:49] Dan Fu: So when I try to do something like upload a whole book to Gemini, what happens behind the scenes, well, maybe not Gemini, because we don't necessarily know what its architecture is, but let's say we upload it to Llama, what happens [00:07:00] behind the scenes is that it's going to take every single word in that book and compare it to every other word.[00:07:05] Dan Fu: And this has led to some pretty impressive things. But it's kind of a brute forcing of the way that you would try to interpret something. Sorry, don't want to. Okay, no, no laser pointer. What attention does afterwards is that instead of always operating in this quadratic thing, it takes a row wise softmax over this matrix, and then multiplies it by this values matrix.[00:07:32] Dan Fu: So, one of the key points to notice is that the output size is always going to be the same as the inputs, at least in standard self attention. So one of the first things that folks tried to do around 2020 is this thing called linear attention, which is just noticing that if we take out this softmax, if we take out this non linearity in the middle of the attention operation, and then if you compute the keys and values operation first, you actually never hit this quadratic bottleneck.[00:07:57] Dan Fu: So that's potentially a way [00:08:00] to get a lot more computationally efficient. And there are various ways to do this, by basically using feature maps or trying to approximate this overall attention computation. But some of this work sort of started to hit a wall in 2020. And the basic challenges were two.[00:08:16] Dan Fu: So one was quality.
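The linear attention trick Dan describes, dropping the softmax so the key-value product can be computed first, can be sketched in a few lines of NumPy (a toy illustration, not any production implementation; the feature map `phi` below is an arbitrary stand-in, not a particular published choice):

```python
import numpy as np

def softmax_attention(Q, K, V):
    # Standard attention: the (n x n) score matrix S compares every
    # token with every other token -- O(n^2) time and memory in n.
    S = Q @ K.T / np.sqrt(Q.shape[-1])
    P = np.exp(S - S.max(axis=-1, keepdims=True))
    P /= P.sum(axis=-1, keepdims=True)
    return P @ V

def linear_attention(Q, K, V):
    # Drop the softmax non-linearity and reassociate:
    # (phi(Q) phi(K)^T) V == phi(Q) (phi(K)^T V).
    # phi(K)^T V is only (d x d), so cost is O(n d^2) -- linear in n.
    phi = lambda x: np.maximum(x, 0) + 1e-6  # toy positive feature map
    Qf, Kf = phi(Q), phi(K)
    KV = Kf.T @ V                 # (d, d) summary, built once
    Z = Qf @ Kf.sum(axis=0)       # per-row normalizer
    return (Qf @ KV) / Z[:, None]
```

The reassociation is exact for this featurized form; what linear attention gives up is the softmax itself, which is where the quality gap Dan mentions comes from.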
Back then, it was kind of hard to get good quality with these linear attention operators. The other one was actually hardware efficiency. So this feature map that was just shown, I've simplified it here, actually ends up being quite computationally expensive if you just implement it naively.[00:08:34] Dan Fu: So you started having these operators where not only are you not really sure if they have the same quality, but they're also actually just wall clock slower. So you kind of end up getting the worst of both worlds. So that kind of sets the stage for four years ago.[00:08:49] Dan Fu: Keep this in mind, because linear attention is actually going to come back in a few years once we have a better understanding. But one of the works that started kicking off this [00:09:00] mini revolution in post transformer architectures was this idea called state space models. So here the seminal work is the S4 paper in 2022.[00:09:09] Dan Fu: And this piece of work really brought together a few ideas from some long running lines of research. The first one, and this is really one of the keys to closing the gap in quality, was just using things that, if you talk to an electrical engineer off the street, they might know like the back of their hand.
So some of those early states-based model papers were looking at this relatively, relatively simple recurrent update model that comes from maybe chapter one of a signal processing class.[00:09:59] Dan Fu: But then using [00:10:00] some principle theory about how you should do that recurrent update in order to really get the most that you can out of your hidden state, out of your out of your sequence. So that, that was one key idea for quality and. When this was eventually realized, you started to see a bunch of benchmarks that were pretty sticky for a few years.[00:10:20] Dan Fu: Things like long range arena, some long sequence evaluation benchmarks, There was stuff in time series, time series analysis. They started to, you started to see the quality tick up in meaningful ways. But the other key thing that What's so influential about these states based models is that they also had a key idea about how you can compute these things efficiently.[00:10:45] Dan Fu: So if you go back to your machine learning 101 class where you learned about RNNs, one thing that you may have learned is that they don't paralyze as well as detention, because if you just run them naively, you have to do this kind of sequential update to process new tokens, [00:11:00] whereas in attention, you can process all the tokens in parallel at one time.[00:11:04] Dan Fu: One of the key insights behind the S4 paper was that these recurrent models, you could take them and you could also formulate them as a convolution. And in particular, with a convolution, you could, instead of using a PyTorch conv1d operation, you can compute that with the FFT. And that would give you n log n compute in the in the sequence length n with an operator that was relatively well optimized for modern hardware.[00:11:28] Dan Fu: So those are really, I'd say, the two key ideas in 2022 that started allowing these breakthroughs to happen in these non transformer architectures. 
So, these ideas about how to model the recurrent updates of a sequence in a principled way, and also these key ideas in how you can compute it efficiently by turning it into a convolution and then scaling it up with the FFT.[00:11:53] Dan Fu: Along those same lines, afterwards we started putting out some work on specialized kernels, so just [00:12:00] like we have FlashAttention for transformers, we also have works like FlashFFTConv. And if you look at these lines of work, oftentimes whenever you see a new architecture, you see a new primitive, one of the table stakes now is: do you have an efficient kernel so that you can actually get wall clock speed up?[00:12:14] Idea 3: Selection[00:12:14] Dan Fu: So by 2022, we were starting to have these models that had promising quality primitives and also promising wall clock speeds. So you could actually see regimes where they were better than transformers in meaningful ways. That being said, there was still sometimes a quality gap, particularly for language modeling.[00:12:33] Dan Fu: And because language is so core to what we do in sequence modeling these days, the next key idea that I'm going to talk about is this idea of selection mechanisms. And this is basically an idea of, so you have this recurrent state that you're keeping around that just summarizes everything that came before.[00:12:50] Dan Fu: And to get a good sequence model, one of the things that you really need to be able to do is have the model learn what's the best way to pick out pieces from that recurrent [00:13:00]
If you squint at the LSTM paper you, you can probably find, find this gating mechanism. But turns out you can take those old ideas, add them into these new. state space models, and then you can see quality start to pick up. If you've heard of the Mamba model, this also takes the selection to the next level by actually making some changes in that fundamental recurrent state space.[00:13:40] Dan Fu: So, it's not only just this gating that happens around the SSM layer, but also you can actually make The ABCD matrices of your state space model, you can make them data dependent, which will allow you to even better select out different pieces from your hidden state depending on what you're seeing. I'll also point out if you look at the [00:14:00] bottom right of this figure, there's this little triangle with a GPU SRAM, GPU HBM, and this, this is just continuing that trend of when you have a new architecture you, you, you also release it with a kernel to, to, to show that it is hardware efficient, that it, that it can be hardware efficient on modern hardware.[00:14:17] Dan Fu: The, the, one of the next cool things that happened is once we had this understanding of these are the basic pieces, these are the basic principles behind some of the sequence models linear attention actually started to come back. 
So in earlier this year, there was a model called BASED the, from Simran Arora and, and some other folks, that combined a more principled version of linear attention that basically the, the, the, the two second summary is that it used a Taylor approximation of the softmax attention, combined that with a simple sliding window attention and was starting to able, starting to be able to expand the Pareto frontier of how much data can you recall from your sequence, versus how small is your recurrent state size.[00:14:58] Dan Fu: So those orange dots [00:15:00] are, at the top there, are just showing smaller sequences that can recall more memory.[00:15:07] Just Read Twice[00:15:07] Dan Fu: And the last major idea I think that has been influential in this line of work and is very relatively late breaking just a few months ago, is just the basic idea that when you have these models that are fundamentally more efficient in the sequence length, you maybe don't want to prompt them or use them in exactly the same way.[00:15:26] Dan Fu: So this was a really cool paper called Just Read Twice, also from Simran. That basically said, hey, all these efficient models can process tokens so much more efficiently than transformers that they can sometimes have unfair advantages compared to a simple transformer token. So, or sorry, a simple transformer model.[00:15:44] Dan Fu: So take, for example the standard, the standard use case of you have some long document, you're going to pass it in as input, and then you're going to ask some question about it. One problem you might imagine for a recurrent model where you have a fixed state size is, let's say that [00:16:00] you're. Article is very long, and you're trying to ask about some really niche thing.[00:16:04] Dan Fu: You can imagine it might be hard for the model to know ahead of time what information to put into the hidden state. 
But these, these, these models are so much more efficient that you can do something really stupid, like, you can just put the document write down the document, write down the question, write down the document again, and then write down the question again, and then this time, the second time that you go over that document, you know exactly what to look for.[00:16:25] Dan Fu: And the cool thing about this is, so this is, And this this results in better quality, especially on these recall intensive tasks. But the other interesting thing is it really takes advantage of the more efficient architectures that, that we're having here. So one of the other, I think, influential ideas in this line of work is if you change the fundamental compute capabilities of your model and the way that it scales, you can actually start to query it at test time differently.[00:16:51] Idea 4: Test Time Compute[00:16:51] Dan Fu: And this actually, of course, goes back to those slides on test time compute. So while everybody's looking at, say, test time compute for big transformer models, [00:17:00] I think potentially a really interesting research question is, how can you take those and how does it change with this new next generation of models?[00:17:09] Dan Fu: So the, I'll just briefly summarize what some of those key ideas were and then talk and then show you briefly kind of what the state of the art is today. So, so the four key ideas are instead of just doing a simple linear attention approximation, instead take ideas that we know from other fields like signal processing, do a more principled approach to your modeling of the sequence.[00:17:32] Idea 2: Hardware & Kernel Support[00:17:32] Dan Fu: Another key idea throughout all these lines of work is you really want. Hardware and kernel support from day one. 
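The "just read twice" trick described above amounts to a trivial change in prompt construction (the template below is illustrative, not the paper's exact format):

```python
def read_twice_prompt(document: str, question: str) -> str:
    # "Just Read Twice"-style prompting for recurrent models: repeat the
    # context, so on the second pass the model already knows what the
    # question is asking for and can store the right facts in its
    # fixed-size state.
    return (
        f"{document}\n\nQuestion: {question}\n\n"
        f"{document}\n\nQuestion: {question}"
    )
```

For a transformer this doubles an already-quadratic cost, but for a linear-time recurrent model the second pass is cheap, which is why the trick favors these architectures.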
So even if your model is theoretically more efficient, if somebody goes and runs it and it's two times slower, one of the things that we've learned is that if you're in that situation, it's just gonna be dead on arrival.[00:17:49] Dan Fu: So you want to be designing your architectures around that. One of the key machine learning ideas that has been important for the quality is just making sure that you encode different ways that you can [00:18:00] select from your hidden state, and really focus on that as a key decider of quality. And finally, I think one of the emerging new things for this line of work, and something that's quite interesting, is: what are the right test time paradigms for these models?[00:18:15] Dan Fu: How do they change relative to what you might do for a standard transformer? I'll briefly end this section. So I've labeled this slide "where we are yesterday" because Eugene is going to talk about some new models that he released literally this morning. But as of yesterday, some of the really cool results out of these efficient alternative models were: AI21 trained this hybrid MoE called Jamba.[00:18:40] Dan Fu: That is currently the state of the art for these non transformer architectures. NVIDIA and MIT put out this new diffusion model called Sana recently, and one of their key observations is that you can take a standard diffusion transformer model, replace the layers with linear [00:19:00] attention, and then that lets you scale to much larger images and much longer sequences more efficiently.[00:19:07] Dan Fu: And one thing that I don't think anybody would have called a few years ago is that one of those gated SSMs, gated state space models, ended up on the cover of Science, because a great group of folks went and trained some DNA models.
So that's Michael Poli and Eric Nguyen from Stanford and the Arc Institute.[00:19:26] Dan Fu: So we're really at an exciting time in 2024 where these non transformer, post transformer architectures are showing promise across a wide range of modalities, of applications, and of tasks. And with that, I'll pass it on to Eugene, who can tell you a little bit about the latest and greatest with RWKV.[00:19:49] RWKV vs SSMs[00:19:49] Eugene Cheah: So, that's useful? Yeah. You're talking to here. Oh, I'm talking to here. Okay. So, yeah, two streams. Yeah. So, I think one common question that we tend to get asked, right, is what's the difference between [00:20:00] RWKV and state space? So I think one of the key things to really understand, right, the difference between the two groups, is that we are actually more like an open source, random internet meets academia kind of situation.[00:20:11] Eugene Cheah: Like, most of us never wrote any paper, but we basically looked at RNNs and linear attention when Attention Is All You Need came out, and then we decided, like, hey, there is a quadratic scaling problem, why don't we try fixing that instead? So we ended up developing our own branch, but we end up sharing ideas back and forth.[00:20:30] Eugene Cheah: And we do all this actively in Discord, GitHub, etc. This was so bad for a few years, right, that basically the average group's h-index was so close to zero, right, that EleutherAI actually came in and helped us write our first paper. Great, now our h-index is three, apparently.
But the thing is, like, a lot of these experiments led to results, and essentially we took the same ideas from linear attention, [00:21:00] and we built on it.[00:21:01] Eugene Cheah: So, to take a step back into, like, how does RWKV handle its own attention mechanism and achieve the same goals of, like, O(n) compute, respectively, in focus of our overall goal to make AI accessible to everyone, regardless of language, nation, or compute. That's our goal. We actually train our models primarily on over a hundred languages, which is another topic altogether.[00:21:23] Eugene Cheah: And our goal is to train to even 200 languages to cover all languages in the world. But at the same time, we work on this architecture to lower the compute cost so that people can run it on Raspberry Pis and on anything. So, how did RWKV break the dependency of LSTM token flow? Because I think to understand the architecture, right, it's probably easier to understand it from the RNN lens.[00:21:46] Eugene Cheah: Because that's what we built on, whereas state space kind of tried to start anew and took lessons from that, so there's a little bit of divergence there. And this is AKA our version of linear attention. So to take a step back: [00:22:00] all foundation models, be it transformers or non transformers, at a very high level, right,[00:22:05] Eugene Cheah: pump in the tokens, I mean, text, turn things into embeddings, and go through a lot of layers. They generate a lot of states, be it the QKV cache, or RNN states, or RWKV states, and output an embedding. They are not the same thing. And we just take more layers and more embeddings.
And somehow that magically works.[00:22:23] Eugene Cheah: So, if you remember your ancient RNN lessons, which we call deep learning these days, the general idea is that you have the embedding information flowing all the way up, and then you take that information and you flow it back down, and then you process it as part of your LSTM layers.[00:22:41] Eugene Cheah: So, this is how it generally works. Karpathy is quoted saying that RNNs are actually unreasonably effective. The problem is this is not scalable. To start doing work on the second token, you need to wait for the first token. And likewise for the third token and fourth token, yada yada.[00:22:55] Eugene Cheah: That is CPU land, not GPU land. So you [00:23:00] can have an H100 and you can't even use 1 percent of it. So that's kind of why RNNs didn't really take off in the direction that we wanted, like, billions of parameters, when it comes to training. So, what did RWKV version 0 do? Boom. We just did the dumbest, lamest thing.[00:23:13] Eugene Cheah: Sorry, this is the bottleneck for the RNN. We did the dumb thing of removing that line. And it kind of worked. It trained. It sucked, but it kind of worked. Then we were like, hey, no one cared because the loss was crap, but how do we improve that? And that's essentially where we moved forward, because if you see this kind of flow, right, you can actually get your GPU saturated quickly, where it essentially cascades respectively.[00:23:41] Eugene Cheah: So I'm just waiting for this to loop again. So it's like, once you get your first layer, your token, to be computed finish, you start to cascade your compute all the way until you're like, hey, I'm using 100 percent of the GPU.
So we worked on it, and we started going along the principle that as long as we keep this general architecture, [00:24:00] where we can cascade and be highly efficient with our architecture, nothing is sacred in our architecture.[00:24:06] Eugene Cheah: And we have done some crazy ideas. In fact, if you ask me to explain some things in the paper, right, officially in the paper, I'll say we had this idea and we wrote it this way. The reality is someone came with the code, we tested it, it worked, and then we rationalized later. So, the general[00:24:24] RWKV Arch[00:24:24] Eugene Cheah: idea behind RWKV is that we generally have two major blocks that we do.[00:24:30] Eugene Cheah: We call them time mix and channel mix. And time mix generally handles long term memory states, where essentially we apply the matrix multiplication and SiLU activation functions to processing an input embedding and an output embedding. I'm oversimplifying it, because this calculation changed every version, and we have, like, version 7 right now.[00:24:50] Eugene Cheah: Channel mix is similar to Based in the sense that it does shorter term attention, where it just looks at the sibling token, the token before it, because [00:25:00] there's a shift in the token shift matrix. I don't really want to go too much into the papers themselves, because, like, we do have three papers on this.[00:25:09] Eugene Cheah: Basically, RWKV: Reinventing RNNs for the Transformer Era; Eagle and Finch: RWKV with Matrix-Valued States, which is the updated version 5 and version 6; and Goldfinch, which is our hybrid model, respectively. We are already writing the paper for version 7, RWKV-7, named Goose. All our architectures are named after birds.[00:25:30] Eugene Cheah: And I'm going to cover QRWKV as well, and where did that lead to? Great! Because we are all GPU poor, and to be clear, like, most of this research is done only on a handful of H100s, which one Google researcher told me was, like, the experiment budget for a single researcher.[00:25:48] Eugene Cheah: So, our entire organization has less compute than a single researcher in Google. So one of the things that we explored was how do we convert transformer models instead? Because [00:26:00]
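A heavily simplified sketch of the channel-mix block with its token shift, assuming illustrative weights (the real RWKV block differs in details and, as Eugene notes, has changed across versions):

```python
import numpy as np

def channel_mix(x, W_k, W_v, W_r, mu):
    # Token shift: each position blends its own embedding with the
    # previous token's, giving the "looks at the token before it"
    # short-term mixing. mu and the weight matrices are illustrative.
    x_prev = np.vstack([np.zeros_like(x[:1]), x[:-1]])  # shift by one token
    xk = mu * x + (1 - mu) * x_prev
    k = np.maximum(xk @ W_k, 0) ** 2           # squared-ReLU key path
    r = 1.0 / (1.0 + np.exp(-(xk @ W_r)))      # receptance gate in (0, 1)
    return r * (k @ W_v)
```

The point of the sketch is the data flow: a one-token shift plus a gated MLP, with no attention matrix anywhere, so cost stays linear in sequence length.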
Because we are all GPU poor and to be clear, like, most of this research is done, like, only on a handful H100s, which I had one Google researcher told me that was, like, his experiment budget for a single researcher.[00:25:48] Eugene Cheah: So, our entire organization has less compute than a single researcher in Google. So We, we, one of the things that we explored into was to how do we convert transformer models instead? Because [00:26:00] someone already paid that billion dollars, a million dollars onto training, so why don't we take advantage of those weights?[00:26:05] Eugene Cheah: And, and to, I believe, together AI worked on the lockets for, for the Lambda side of things, and, and we took some ideas from there as well, and we essentially did that for RWKB.[00:26:15] QWRKWv6 launch[00:26:15] Eugene Cheah: And that led to, Q RWKB6, which we just dropped today, a 32 bit instruct preview model, where we took the Quen 32 bit instruct model, freeze the feedforward layer, remove the QKB attention layer, and replace it with RWKB linear layers.[00:26:32] Eugene Cheah: So to be clear, this means we do not have the rwkv channel mix layer, we only have the time mix layer. But but once we do that, we train the rwkv layer. Important is that the feedforward layer needs to be frozen, so the new attention can be learned. And then we unfreeze the feedforward layer, and train all the layers together with a custom learning rate schedule, so that they can learn how to work together.[00:26:54] Eugene Cheah: The end result, surprisingly, And, to be honest, to the frustration of the R. W. [00:27:00] KV MOE team, which ended up releasing the model on the same day, was that, with just a few hours of training on two nodes, we managed to get it to be on par, kind of, with the original QUAN32B model. 
So, in fact, when the first run, right, that completely confused us, it was like, and I was telling Daniel Goldstein, Smirky, who kind of leads most of our research coordination, When you pitched me this idea, you told me at best you'll get the same level of performance.[00:27:26] Eugene Cheah: You didn't tell me the challenge and score and Winograd score will shoot up. I don't know what's happening there. But it did. MMLU score dropping, that was expected. Because if you think about it, when we were training all the layers, right, we were essentially Like, Frankenstein this thing, and we did brain damage to the feedforward network layer 2 with the new RWKB layers.[00:27:47] Eugene Cheah: But, 76%, hey, somehow it's retained, and we can probably further train this. We didn't even spend more than 3 days training this, so there's a lot more that can be done, hence the preview. This brings up [00:28:00] a big question, because We are already now in the process of converting to 7TB. We are now, this is actually extremely compute efficient to test our attention mechanic.[00:28:10] Eugene Cheah: It's like, it becomes a shortcut. We can, we are already planning to do our version 7 and our hybrid architecture for it. Because we don't need to train from scratch. And we get a really good model out of it. And the other thing that is uncomfortable to say is that because we are doing right now on the 70b is that if this scales correctly to 128k context length, I'm not even talking about a million 128, majority of enterprise workload today is just on 70b at under 32k context length.[00:28:41] Eugene Cheah: That means if this works and the benchmark matches it, It means we can replace the vast majority of current AI workload, unless you want super long context. And then sorry, can someone give us more GPUs? Because we do need the VRAM for super long context, sadly. 
So yeah, that's what we are working on, and essentially, [00:29:00] we are excited about this and want to just push it further.[00:29:02] Eugene Cheah: And this conversion process, to be clear, I don't think it's going to be exclusive to RWKV. It probably will work for Mamba as well, I don't see why not. And we will probably see more ideas, or more experiments, or more hybrids. Yeah, like, one of the weirdest things that I wanted to say outright, and I confirmed this with the Black Mamba team and the Jamba team, because we did the Goldfinch hybrid model, is that none of us understand why a hard hybrid, with a [00:29:28] Eugene Cheah: state space model and a transformer, performs better than the baseline of both. It's like, when you train one and then you replace, you expect the same results. That's our pitch. That's our claim. But somehow when we jam both together, it outperforms both. And that's like one area where, like, we only have four experiments, plus four teams, and a lot more needs to be done.[00:29:51] Eugene Cheah: But these are things that excite me, essentially, because that is where we can potentially move ahead. Which brings us to what comes next.[00:30:00] What's next[00:30:00] Dan Fu: So, this part is kind of just where we'll talk a little bit about stuff that we're excited about, and maybe have some wild speculation on what's coming next.[00:30:12] Dan Fu: And, of course, this is also the part that will be more open to questions. So, a couple of things that I'm excited about: one is continued hardware model co-design for these models. So one of the things that we've put out recently is this library called ThunderKittens.
It's a CUDA library.[00:30:29] Dan Fu: And one of the things that we found frustrating is every time that we built one of these new architectures, and I'm sure you had the exact same experience, we'd have to go and spend two months in CUDA land, like writing these new efficient things. And if we decided to change one thing in PyTorch, like, one line of PyTorch code is like a week of CUDA code at least.[00:30:47] Dan Fu: So one of our goals with a library like ThunderKittens was, we just broke down what are the key principles, what are the key hardware things, what are the key compute pieces that you get from the hardware. So for example, on [00:31:00] H100 everything really revolves around a warpgroup matrix multiply operation.[00:31:06] Dan Fu: So you really want your operation to be able to split into relatively small matrix-matrix multiply operations, like multiplying two 64 by 64 matrices, for example. And so if you know that ahead of time when you're designing your model, that probably gives you, you know, some information about how you set the state sizes and how you set the update function.[00:31:27] Dan Fu: So with ThunderKittens we basically built a whole library just around this basic idea that your basic compute primitive should not be a float, it should be a matrix, and everything should just be matrix compute. And we've been using that to try to both re-implement some existing architectures, and also start to design [00:31:44] Dan Fu: some new ones that are really designed with a tensor core primitive in mind. Another thing that at least I'm excited about is, over the last four or five years, we've really been looking at language models as the next thing. But if you've been paying [00:32:00] attention to Twitter, there's been a bunch of new next generation models that are coming out.
So, video generation models that can run real time, that are supported by your mouse and your keyboard, that, I'm told, if you play with them, only have a few seconds of memory. Can we take that model, can we give it a very long context length so that you could actually maybe generate an entire game state at a time?[00:32:25] Dan Fu: What does that look like for the model? You're certainly not going to do a giant quadratic attention computation to try to run that. Maybe use some of these new models, some of these new video generation models that came out. So Sora came out, I don't know, two days ago now. But with super long queue times and super long generation times.[00:32:43] Dan Fu: So that's probably a quadratic attention operation at the bottom of it. What if we could remove that and get the same quality, but a lot faster generation time? Or some of the demos that we saw from Paige earlier today. You know, if I have a super long conversation with my [00:33:00] Gemini bot, what if I wanted to remember everything that it's seen in the last week?[00:33:06] Dan Fu: I mean, maybe you don't for personal reasons, but what if I did, you know? What does that mean for the architecture? And I think, you know, that's certainly something I'm pretty excited about. I'm sure you're excited about it too. So, I think we were supposed to have some hot takes, but I honestly don't remember what our hot takes were.[00:33:21] Hot Takes - does anyone really need long context?[00:33:21] Eugene Cheah: Yeah, including the next slide. Hot takes, yes, these are our[00:33:25] Dan Fu: hot takes.[00:33:25] Eugene Cheah: I think the big one on Twitter that we saw, that we shared, was the question: is RAG relevant in the case of, like, the future of state based models?[00:33:38] Dan Fu: Let's see, I haven't played too much with RAG. But when I have,
I'll say I found it was a little bit challenging to do research on, because we had this experience over and over again, where you could have an embedding model of any quality, so you could have a really, really bad embedding model, or you could have a really, really [00:34:00] good one, by any measure of good.[00:34:03] Dan Fu: And for the final RAG application, it kind of didn't matter. That's what I'll say about RAG while I'm being recorded. I know it doesn't actually answer the question, but[00:34:13] Eugene Cheah: Yeah, so I think a lot of folks are, like, extremely excited about the idea of RWKV or state space models potentially having infinite context.[00:34:21] Eugene Cheah: But I think the reality is that when we say infinite context, we just mean a different kind of infinite context, or, as it was previously covered, you need to test the model differently. So, think of it more along the lines of the human. Like, I don't remember what I ate for breakfast yesterday.[00:34:37] Eugene Cheah: Yeah, that's the statement that I'll say. And we humans are not quadratic transformers. If we were, if, let's say, we increased our brain size for every second we live, we would have exploded by the time we are 5 years old or something like that. And I think, basically, fundamentally for us, right, regardless of whether it's RWKV, state space, xLSTM, [00:35:00] etc, our general idea is that instead of that expanding state, that increase in computational cost, what if we have a fixed state size?[00:35:08] Eugene Cheah: And information theory dictates that that fixed state size will have a limit. Just how big of a limit is a question. Like, RWKV is running at 40 megabytes for its state. Its future version might run into 400 megabytes.
That is like millions of tokens, if you're talking about, mathematically, the maximum possibility.[00:35:29] Eugene Cheah: It's just that, I guess, we are all more inefficient about it, so maybe we hit 100,000. And that's kind of like the work we are doing, trying to push it and maximize it. And that's where the models will start differing, because they will choose to forget things, and they will choose to remember things. And that's why I think that there might be some element of RAG, but it may not be the same RAG.[00:35:49] Eugene Cheah: It may be that the model learns things, and it's like, hmm, I can't remember that article. Let me do a database search. Just like us humans, when we can't remember an article in the company, we do a search on Notion. [00:36:00][00:36:00] Dan Fu: I think something that would be really interesting is if you could have facts that are, so right now, the one intuition about language models is that all those parameters are around just to store random facts about the world.[00:36:14] Dan Fu: And this intuition comes from the observation that if you take a really small language model, it can do things like talk to you, or it kind of has, like, the style of conversation, it can learn that, but where it will usually fall over compared to a much larger one is it'll just be a lot less factual about things that it knows or that it can do.[00:36:32] Dan Fu: But that points to all those weights that we're spending, all that SGD that we're spending to train these models, just being used to store facts. And we have things like databases that are pretty good at storing facts.
So I think one thing that would be really interesting is if we could actually have some sort of outside data store that a language model can look at, that maybe, you know, has some sort of gradient descent in it, but that would be quite interesting.[00:36:58] Dan Fu: And then maybe you could edit it, delete [00:37:00] facts, you know, change who's president, so that it doesn't get lost.[00:37:04] Vibhu: Can we open up Q&A and hot takes for the audience? I have a hot take Q&A. Do these scale? When a 405B state space model exists, RAG exists, no one does long context, who's throwing in 2 million token questions? Hot takes?[00:37:24] Dan Fu: The who's-throwing-in-2-million-token question, I think, is a really good question. So I actually, I was going to offer that as a hot take. I mean, my hot take was going to be that long context doesn't matter. I know I just gave a whole talk about it, but, you know, what's the point of doing research if you can't play both sides.[00:37:40] Dan Fu: But I think, for both of us, the reason that we first got into this was just from the first-principled questions of: there's this quadratic thing, clearly intelligence doesn't need to be quadratic, what is going on, can we understand it better? You know, since then it's kind of turned into a race, which has [00:38:00] been exciting to watch, like, how much context you can take in.[00:38:03] Dan Fu: But I think it's right. Nobody is actually putting a 2 million token context prompt into these models. And, you know, if they are, maybe we can go, you know, design a better model to do that particular thing. Yeah, what do you think about that? You've also been working on this. Do you think long context matters?[00:38:19] Eugene Cheah: So I'm going to burn a bit. How many of you remember the news of Google Gemini supporting 3 million context, right?
Raise your hand.[00:38:28] Vibhu: Yeah, 2 million.[00:38:29] Eugene Cheah: Oh, it's 2 million.[00:38:31] Eugene Cheah: Yeah, how many of you actually tried that? See?[00:38:34] Vibhu: I use it a lot. You? You work for MindsTV. I use it a lot.[00:38:41] Eugene Cheah: So, some people have used it, and I think this is where my opinion starts to differ, because I think the big labs may have a bigger role in this. Because, like, even for RWKV, even when we train long context, the reason why I say VRAM is a problem is that when we do the backprop [00:39:00] against the states, we actually need to maintain the state in between the tokens by the token length.[00:39:05] Eugene Cheah: So that means we need to actually roll out the whole 1 million context if we are actually training 1 million. Which is the same for transformers, actually, but it just means we don't magically reduce the VRAM consumption in the training time space. So that is one of the VRAM bottlenecks, and I'm neither OpenAI nor Google, so donate GPUs if you have too much of them.[00:39:27] Eugene Cheah: But then, putting it back to another paradigm, right, is that I think O1-style reasoning might be actually pushing that direction downwards. In my opinion, this is my partial hot take: let's say you have a super big model, and let's say you have a 70B model that may take double the tokens, but gets the same result.[00:39:51] Eugene Cheah: Strictly speaking, a 70B, and this is even for transformer or non-transformer, right, will take less resources than that 400B [00:40:00] model, even if it did double the amount of thinking.
And if that's the case, and we are still all trying to figure this out, maybe the direction for us is really getting the sub-200B models to be as fast and efficient as possible,[00:40:11] Eugene Cheah: with a very efficient architecture, which some folks happen to be working on, to just reason it out over larger and larger context.[00:40:20] Question: Yeah. One thing I'm super interested in is models that can watch forever. Obviously you cannot train something on infinite context length. How are y'all thinking about that, where you run on a much longer context length than is possible to train on?[00:40:38] Dan Fu: Yeah, it's a great question. So I think, I think you guys probably had tweets along these lines, too. When we first started doing these things, because these are all recurrent models, in theory you could just run it forever. You could just run it forever. And at the very least it won't, like, error out on you or crash.[00:40:57] Dan Fu: There's another question of whether it can actually [00:41:00] use what it's seen in that infinite context. And I think there, one place where the research on architectures probably ran faster than the other research is actually the benchmarks for long context. So you turn it on forever, you want to do everything or watch everything: what is it that you actually wanted to do? Can we actually build some benchmarks for that? Then measure what's happening. And then ask the question, can the models do it? Is there something else that they need? Yeah, I think that if I were to turn back the clock to 2022, that's probably one of the things I would have done differently, which would have been to actually get some long context benchmarks out at the same time as we started pushing context length on all these models.[00:41:41] Eugene Cheah: I will also say the use case. So, like, I think we both agree that there's no infinite memory, and the model needs to be able to learn and decide.
I think what we have observed, and I think this also fits the state space models, is that one of the key advantages of this alternate attention mechanism that is not based on token position is that the model doesn't suddenly become crazy when you go past the [00:42:00] 8k training context length, or a million context length.[00:42:03] Eugene Cheah: It's actually still stable. It's still able to run, it's still able to rationalize. It just starts forgetting things. But some of these things are still there in latent memory. Some of these things are still somewhat there. That's the whole point of why reading twice works. Things like that. And one of the biggest pushes in this direction is that I think both state space and RWKV have separate papers by other researchers where they use this architecture for time series data.[00:42:26] Eugene Cheah: Weather modeling. So, you are not asking what was the weather five days ago. You're asking what's the weather tomorrow, based on the infinite length that, well, as long as this Earth and the computer keep running. And they found that it is, like, better than existing transformer or existing architectures in modeling this weather data.[00:42:47] Eugene Cheah: Controlled for the param size and stuff. I'm quite sure there are people with larger models. So there are things that, in this case, right, there are future applications, if your question is just what's next and not what's 10 years ago.[00:42:59] Dan Fu: Thanks so [00:43:00] much for having us. Get full access to Latent Space at www.latent.space/subscribe
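A rough way to make the memory arithmetic from this discussion concrete: a transformer's KV cache grows linearly with context length (and attention compute grows quadratically), while an RWKV/state-space style model carries a fixed-size recurrent state, like the roughly 40 megabyte figure mentioned above. The sketch below is a back-of-the-envelope illustration only; the layer and head dimensions are assumptions for a generic transformer, not the actual configuration of RWKV or any particular model.

```python
# Back-of-the-envelope comparison: transformer KV-cache memory (grows with
# context length) vs. a fixed-size recurrent state (constant, ~40 MB as
# mentioned in the talk). All model dimensions here are illustrative assumptions.

def kv_cache_bytes(context_len, n_layers=32, n_kv_heads=8, head_dim=128,
                   bytes_per_value=2):
    """Bytes needed to cache keys and values for `context_len` tokens in fp16."""
    per_token = n_layers * n_kv_heads * head_dim * 2 * bytes_per_value  # K + V
    return context_len * per_token

FIXED_STATE_BYTES = 40 * 1024 * 1024  # fixed recurrent state, independent of length

for ctx in (8_000, 128_000, 1_000_000):
    mib = kv_cache_bytes(ctx) / 2**20
    print(f"{ctx:>9,} tokens: KV cache ~{mib:,.0f} MiB vs fixed state 40 MiB")
```

With these assumed dimensions the cache passes the fixed 40 MiB state after only a few hundred tokens, which is the basic argument for constant-state architectures at very long context; the trade-off, as discussed above, is that a fixed state must choose what to forget.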

The Operative Word from JACS
E29: Refractory and Recurrent Idiopathic Granulomatous Mastitis Treatment: Adaptive, Randomized Clinical Trial

The Operative Word from JACS

Play Episode Listen Later Dec 19, 2024 25:51 Transcription Available


In this episode, Lillian Erdahl, MD, FACS, is joined by Fatemeh Shojaeian, MD, MPH, from the Johns Hopkins University School of Medicine. They discuss Dr Shojaeian's recent article, “Refractory and Recurrent Idiopathic Granulomatous Mastitis Treatment: Adaptive, Randomized Clinical Trial,” in which the authors found that, for resistant or relapsing patients with idiopathic granulomatous mastitis, combining methotrexate and corticosteroids offers a promising strategy. This integration of disease-modifying antirheumatic drugs with corticosteroids not only reduces the necessity for high steroid doses but also effectively alleviates associated side effects.   Disclosure Information: Drs Erdahl and Shojaeian have nothing to disclose.   To earn 0.25 AMA PRA Category 1 Credits™ for this episode of the JACS Operative Word Podcast, click here to register for the course and complete the evaluation. Listeners can earn CME credit for this podcast for up to 2 years after the original air date.   Learn more about the Journal of the American College of Surgeons, a monthly peer-reviewed journal publishing original contributions on all aspects of surgery, including scientific articles, collective reviews, experimental investigations, and more.   #JACSOperativeWord

Sisters in Loss Podcast: Miscarriage, Pregnancy Loss, & Infertility Stories
375 - Recurrent Miscarriages and Infertility with Vernetta Stewart

Sisters in Loss Podcast: Miscarriage, Pregnancy Loss, & Infertility Stories

Play Episode Listen Later Dec 18, 2024 27:46


Black women heal when they are surrounded by other Black women. Today's guest truly believes and embodies that motto. Vernetta Stewart, also known as Chef Vernetta, is the founder of Footprints of Angels, a nonprofit organization that supports women impacted by infertility and recurrent miscarriages. The organization provides women with support so they no longer have to live in silence and shame. In this episode Vernetta takes us on her journey of recurrent miscarriages and becoming the mom of 3 Angel babies. She also shares how her nonprofit is helping women like her with fertility challenges. This episode is for you if you want to know what life after loss looks like. Become a Sisters in Loss Birth Bereavement, and Postpartum Doula Here Living Water Doula Services Book Recommendations and Links Below You can shop my Amazon Store for the Book Recommendations You can follow Sisters in Loss on Social Join our Healing Collective Online Support Group Join the Sisters in Loss Online Community Sisters in Loss TV Youtube Channel Sisters in Loss Instagram Sisters in Loss Facebook Sisters in Loss Twitter You can follow Erica on Social Erica's Website Erica's Instagram Erica's Facebook Erica's Twitter

Smooth Stones
182 Coaching During Recurrent Loss with Summer

Smooth Stones

Play Episode Listen Later Dec 13, 2024 57:18


In this heartfelt episode, Amy sits down with her client, Summer, to discuss the profound journey they've shared through multiple miscarriages. Summer reflects on her experiences of loss, the lessons she learned from her three angel babies, and the impact of her recurrent miscarriages on her emotional and physical well-being. From navigating deep grief and depression to finding solace in coaching and the decision to seek help from a fertility clinic, Summer's story is a testament to resilience, self-discovery, and the importance of giving oneself grace and understanding during times of profound loss and uncertainty. Join Summer and Amy as they delve into the highs and lows of pregnancy after loss, the emotional hurdles of trying again, and the compassionate strategies that helped Summer cope and find hope. Through honest and raw conversation, they explore themes of empathy, self-trust, the importance of knowing when to take a break, and creating a supportive environment to process grief and prepare for new beginnings. This episode is a poignant reminder that even in the darkest times, there is potential for healing, connection, and ultimately, a renewed sense of hope and purpose.   Get support from Amy! Click HERE Follow me on Instagram! @amy.smoothstonescoaching Visit my website. Photo provided by Summer, used with permission. Music by ZingDog on Pond5   00:00 Introduction and Client Praise 02:21 Summer's Angel Babies 05:27 Lessons from Loss 09:27 Struggles and Seeking Support 14:20 Finding Hope Through Podcasts 18:59 The Value of Coaching 22:25 Trusting Yourself and Taking Care 26:40 Navigating Social Expectations 29:31 Balancing Self-Care and Compassion 30:31 Facing Judgment and Letting Go 32:39 Pregnancy After Loss: A Personal Journey 36:27 Navigating Medical Advice and Personal Choices 41:07 Emotional Hibernation and Finding Clarity 49:02 Current Pregnancy and Managing Anxiety 55:02 Message of Hope and Conclusion

Fearlessly Fertile
A Fearlessly Fertile Special: Unexplained Infertility Isn’t A Diagnosis: Getting to the Root Cause of Fertility Issues and Recurrent Miscarriage, A Conversation with Caryn Johnson, CEO of BOND

Fearlessly Fertile

Play Episode Listen Later Dec 12, 2024 79:22


Who doesn’t love supporting products that are based on a heart-based purpose and mission? This is why I invited Caryn Johnson, Founder and CEO of BOND on to the podcast. Having lived her own fertility journey, Caryn, former CMO of Vital Proteins is passionate about helping women support their health by balancing their hormones and […] The post A Fearlessly Fertile Special: Unexplained Infertility Isn’t A Diagnosis: Getting to the Root Cause of Fertility Issues and Recurrent Miscarriage, A Conversation with Caryn Johnson, CEO of BOND appeared first on Rosanne Austin.

SciShow Tangents
Dreams with Trace Dominguez!

SciShow Tangents

Play Episode Listen Later Dec 10, 2024 53:29


Is a dream really a wish your heart makes? Or is it just that your brain is an organ that never really turns off? What do your dreams even mean??? These are just some of the perplexing questions this episode posed to us and our special return guest, Trace Dominguez!
SciShow Tangents is on YouTube! Go to www.youtube.com/scishowtangents to check out this episode with the added bonus of seeing our faces! Head to www.patreon.com/SciShowTangents to find out how you can help support SciShow Tangents, and see all the cool perks you'll get in return, like bonus episodes and a monthly newsletter! A big thank you to Patreon subscriber Garth Riley for helping to make the show possible! And go to https://store.dftba.com/collections/scishow-tangents to buy some great Tangents merch!
Follow us on Twitter @SciShowTangents, where we'll tweet out topics for upcoming episodes and you can ask the science couch questions! While you're at it, check out the Tangents crew on Twitter: Ceri: @ceriley Sam: @im_sam_schultz Hank: @hankgreen

[Definition]
https://pmc.ncbi.nlm.nih.gov/articles/PMC2814941/
https://www.scientificamerican.com/article/how-to-control-dreams/

[The Scientific Definition]
Dreambooks
https://www.cabinetmagazine.org/issues/67/vandegrift.php#:~:text=Popular%20in%20Europe%20since%20antiquity,game%20then%20sweeping%20Northeastern%20cities.
https://web.archive.org/web/20151222092158/http://www.eastm.org/index.php/journal/article/viewFile/146/134
https://www.obafemio.com/uploads/5/1/4/2/5142021/dream_interpretation_in_ancient_china.pdf
Incubation
https://publishing.cdlib.org/ucpressebooks/view?docId=ft5j49p06s&chunk.id=d0e2624&toc.id=&brand=ucpress#:~:text=Common%20throughout%20all%20antiquity%2C%20the,some%20divinely%20inspired%20dream%20vision.
https://www.britannica.com/topic/dream-sleep-experience/Dreams-as-a-source-of-divination#ref984709
https://www.dreamscience.ca/en/documents/New%20content/incubation/Incubation%20overview%20for%20website%20updated.pdf
Ominous-Vapor Watcher
https://www.obafemio.com/uploads/5/1/4/2/5142021/dream_interpretation_in_ancient_china.pdf
https://www.britannica.com/topic/Zhouli

[Trivia Question]
Rapid eye movement (REM) saccade speed
https://www.sciencedirect.com/science/article/abs/pii/B9780080450469010895
https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1469-8986.1985.tb01551.x
https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1469-8986.1983.tb03008.x
https://pubmed.ncbi.nlm.nih.gov/9406327/

[Fact Off]
Approximating dreams with generative AI
https://www.sciencefocus.com/the-human-body/heres-how-ai-could-soon-decode-your-dreams
Lucid dreaming and the effects of video games on dreams
https://academic.oup.com/nc/article/2017/1/nix009/3859602
https://theconversation.com/im-a-lucid-dream-researcher-heres-how-to-train-your-brain-to-do-it-118901
https://psycnet.apa.org/record/2014-00817-001
https://psycnet.apa.org/doiLanding?doi=10.1037%2F1053-0797.16.2.96
https://psycnet.apa.org/record/2014-00817-001
https://psycnet.apa.org/record/2008-19013-002
https://www.theverge.com/2014/1/21/5330636/video-games-effect-on-dreams

[Ask the Science Couch]
Neuroscience of fever dreams
https://pmc.ncbi.nlm.nih.gov/articles/PMC3830719/
https://journals.ub.uni-heidelberg.de/index.php/IJoDR/article/view/28492
https://pmc.ncbi.nlm.nih.gov/articles/PMC6997236/
https://www.sciencedirect.com/science/article/pii/S0033318268718077?via%3Dihub
https://www.accjournal.org/journal/view.php?number=1528
Patreon bonus: Recurring dream content and possible psychology
https://psycnet.apa.org/record/2010-23497-001
https://www.researchgate.net/profile/Antonio-Zadra/publication/232509978_Recurrent_dreams_Their_relation_to_life_events/links/53d673f10cf220632f3da1f7/Recurrent-dreams-Their-relation-to-life-events.pdf
https://www.sciencedirect.com/science/article/pii/S1053810005000772?casa_token=Dofy4I_w2PsAAAAA:DdZ6qAtKJiS6OEE3Iu8pETHldBs5n1SH3lvSQl6WuCNVv9Xi8v09wuR9bWki5YROcyKWXAZ3CcN3
https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2018.01812/full
https://psychiatryonline.org/doi/10.1176/ajp.134.12.1335

[Butt One More Thing]
Psychoanalyst Hans Thorner documenting a patient who dreamed of butt spiders
https://bgsp.edu/app/uploads/2014/12/Blechner-M-Patients-dreams-and-the-countertransference.pdf
https://www.taylorfrancis.com/chapters/edit/10.4324/9780429477546-12/three-defences-inner-persecution-hans-thorner

Fertility Wellness with The Wholesome Fertility Podcast
EP 315 What to Focus on if You're Trying to Conceive After 40 | Dr. Marc Sklar

Fertility Wellness with The Wholesome Fertility Podcast

Play Episode Listen Later Dec 10, 2024 48:26


In this episode of The Wholesome Fertility Podcast, I sit down with Dr. Marc Sklar to delve into evolving perspectives on fertility, especially for women over 40. We discuss the need to shift our focus from quantity to quality in fertility treatments, and the empowering impact this has on women navigating their fertility journeys. We cover the realities of IVF, the importance of patience, self-advocacy, and creating space for personal growth and healing. Marc and I also explore complex factors such as genetics, autoimmune issues, and male-related factors in recurrent pregnancy loss. This conversation is full of valuable insights for anyone on their fertility journey, promoting a holistic approach to healing and growth.   Takeaways   A shift in mindset is crucial for couples seeking fertility care after 40. Quality of eggs and embryos becomes more important than quantity as women age. Understanding hormones is important, but shouldn't be the sole focus. Regular ovulation is a key indicator of fertility, regardless of age. Real-life success stories provide hope and perspective for those trying to conceive. Patients should feel empowered to advocate for themselves in medical settings. IVF is not a guaranteed solution and should not be the first option considered. Donor eggs can be a valuable option, but should not be the first recommendation based solely on age. The energetics of fertility are crucial for healing. Recurrent pregnancy loss can stem from various factors, including genetics and autoimmune issues. Male factors contribute to 50% of miscarriages, often overlooked. The importance of the uterine environment in fertility cannot be ignored. Quick fixes are a societal conditioning that impacts health decisions. Understanding the microbiome can enhance fertility treatments. Emotional states can significantly affect physical health and fertility.   
Be sure to check out our Fertility Empowerment Holiday Bundle here https://www.michelleoravitz.com/fertilityempowermentbundle before it's gone!   Guest Bio:   Dr. Marc Sklar — a.k.a. The Fertility Expert — is a natural fertility specialist who has been helping couples get pregnant for 21 years. His mission is to help you feel HOPEFUL and CONFIDENT about your fertility journey again.    In addition to his Doctor of Acupuncture and Oriental Medicine, Dr. Sklar trained at the Harvard Medical School, Mind/Body Medical Institute. He is the creator of Fertility TV, MarcSklar.com and ReproductiveWellness.com, and a Fellow of the American Board of Oriental Reproductive Medicine and Medical Advisor for Symphony Natural Health.   As well as his online program, he also supports his community via his highly popular YouTube channel, FertilityTV, where he shares information-packed videos to educate his followers on all things fertility.    The Fertility Expert lives in San Diego, with his wife and two sons, where he has his clinic Reproductive Wellness. He also works with couples all over the world through his fertility online coaching - the Hope Fertility Program.   FERTILITY TV WEEKLY EPISODE - http://bit.ly/thefertilityexpert FACEBOOK - https://www.facebook.com/thefertilityexpert INSTAGRAM - https://www.instagram.com/the_fertility_expert/     For more information about Michelle, visit: www.michelleoravitz.com   Click here to get free access to the first chapter in The Way of Fertility Book! https://www.michelleoravitz.com/thewayoffertility   The Wholesome Fertility Facebook group is where you can find free resources and support: https://www.facebook.com/groups/2149554308396504/   Instagram: @thewholesomelotusfertility   Facebook: https://www.facebook.com/thewholesomelotus/     Transcript:   Michelle (00:00) Welcome back to the podcast, Dr.
Sklar.   Marc Sklar (00:03) Welcome, well, thank you for having me. It's automatic. But no, it's awesome to reconnect. It's been a while and I'm excited to have a conversation that we both are passionate about, which is everything fertility.   Michelle (00:07) I know it's automatic. Yes.   For sure. We're like, you could say we're a little obsessed, right? With fertility. It's like, we live it, we breathe it. So awesome. So actually today we're going to talk about a couple of different topics, but I wanted to talk to you about pregnancy after 40. Cause I know that a lot of what we hear out there, even about,   Marc Sklar (00:25) 100%. Yeah, absolutely. Yeah.   Michelle (00:46)
And so I say that in the sense that, you know, we have to have a different reality of what is okay and what we're trying to achieve. A woman who's in her 30s is trying to get as many eggs as possible.   Michelle (02:43) Mm-hmm.   Marc Sklar (02:43) So they have as many options when they have their embryos created and they are, you know, it's usually more about in general and this is a making a generalization, but it's more about quantity versus quality. We're like, let's have as many as we have so that we can choose the best quality of those and then we can move forward, you know, with our pregnancy. And...   the approach may or may not be in those situations about egg quality, because there might be other variables that are impacting their ability to conceive. Whereas I think when we are 40 and older, my approach really shifts. I don't care about quantity. I'm really, really focused on quality. And I think that mindset has to be different as a couple.   because then we were not as disappointed like, I didn't get that many follicles and they didn't retrieve as many eggs as I would have hoped. But because that's all we hear about. We hear about, look, we need all these eggs, we need all these embryos. But the reality is, is when we get older, I don't need 20 eggs or 20 embryos, I need a couple good ones. That's really what I'm looking for is a couple good embryos.   to work with and to transfer. So I think really a mind shift needs to happen and our perspective on fertility needs to change. And so for reading and understanding things as if we were 30 versus 40 or older, then we're gonna have, I believe, skewed perspective on our fertility journey. So that to me is number one. Number two is we do all get caught up in our hormones and some of that is appropriate and some of that is not appropriate.   Is it appropriate to understand where our hormones are at when we're at any age? 100%. 
Is it important to understand what our estrogen is doing and what our progesterone and FSH are doing? Absolutely. Is it important to know what our AMH is? Yes. Should we get caught up in AMH and make our whole focus about AMH? No.   The research doesn't promote, doesn't support these variables. Even FSH, AMH are not good indicators for a couple's ability to conceive and have a healthy pregnancy. Are they important for us to just have a baseline and understand? Yes. Will they potentially or can they potentially influence your IVF protocol? Yes.   But that doesn't mean we as couples need to get wrapped up in those numbers and make our fertility all about that because it shouldn't be. My rule of thumb is are you having a regular cycle? Check. Are you ovulating regularly? Check. Is your bleed healthy? Check. You can conceive.   Michelle (05:40) Mm-hmm.   Marc Sklar (06:00) Do we have to look at these other variables? Do we need to check your thyroid? Do we need to work on your adrenal glands and stress? Do we need to make sure your gut is healthy? Do we need to make sure all the systems are functioning properly? Seem analysis is good. Fallopian tubes are open. All of those things are still important. But the main thing that as long as you're ovulating, you can get.   And I think that's a really important piece. Now, we're not talking about IVF or not IVF right now. It's just like conception at 40, right? And or older. And so I think if we just focus on the right things and don't get bogged down by these little details of someone who might approach things a little differently if they were 30, then our approach will be better. It will be healthier.   Michelle (06:37) Mm-hmm.   Marc Sklar (06:57) you'll be more grounded in your approach. And we could focus on the areas that really need attention and support. And so I think that piece is really important as we are in our 40s, approaching fertility, still wanting to conceive. 
If we're always comparing ourselves to other women and other circumstances, we're gonna lose sight of what we need to do and always be trying to like catch up or do what they're doing. And I think that is...   That can really push us down the wrong road. I say this because truly I work with so many women who are over 40. And I see this time and time again. So it's coming from a lot of experience working with women over 40. And I have a wonderful story to share of a woman who is, and everyone will gasp when they hear, okay, when she conceived she was 48.   Michelle (07:55) That's awesome. I love that.   Marc Sklar (07:55) She is, I just spoke to her two days ago. When she delivers, she will be 49. Okay? And I'm not saying she didn't have a long journey.   Michelle (08:08) Was this natural or was it IVF?   Marc Sklar (08:11) This time was natural, but I'm not saying she didn't have a long journey. She did. I'm not saying it was easy. It was not. It was a long journey. It was difficult. Miscarriages, conceiving naturally, conceiving through IVF, long IVF protocols, multiple clinics, like all these things. So it wasn't easy. It was long, but she's 32 weeks pregnant right now.   Michelle (08:40) Wow, amazing.   Marc Sklar (08:41) And I say that because it's possible. It can happen. And these are the sorts of things we see on a regular basis. I'm not saying it's easy at 48, not at all. But I say that for some perspective on the process. Okay. And I think that, you know, do I think everyone could last for seven plus years trying? No, I don't think that's for everybody. She was never going to give up.   Michelle (08:51) Mm-hmm.   Marc Sklar (09:11) Like regardless, like she was never going to stop and never give up until she was pregnant. And that's what she told me. She's like, I'm not going to stop and I'm determined. I was like, okay, I'll support you. Right. That, that, that process is not for everybody. 
Some people will be on it for a year or just have one or two IVF transfers. And they're like, this is too much. I'm done. I'm going to move on. And I respect everybody's path in that process, but

Michelle (09:21) Wow, amazing. Yeah.

Right.

Marc Sklar (09:39) I want everyone to know it's possible, and that's why I share that story. I think it's possible regardless of age with the right support and the right process and the right focus of our attention.

Michelle (09:51) I love that. I really do. And I love the stories, because I think that there's so many people that can benefit, and you have that "Hope" sign in the background. And it's true. There's nothing like real life stories to provide real hope. Cause you can hear, you know, there's a chance of this or a chance of that. But when you actually see an example of somebody going through those challenges that you are and having a successful pregnancy,

Marc Sklar (10:00) Yeah.

Michelle (10:21) I think that there's nothing that compares to that.

Marc Sklar (10:24) Yeah, absolutely. And I love to bring in stories wherever possible. And she was just at top of mind because I just booked her two days ago. So yeah.

Michelle (10:33) That's awesome. You know what I find really cool? In the Guinness Book of World Records, the oldest pregnancy is 58, and it was natural. And it was a woman in England who, you know, in England, they don't have a lot of sunlight and, you know, vitamin D access naturally. So I thought that was really cool. But it can be done. It's possible. Just like you said, and I love that you said

Marc Sklar (10:45) Wow. No.

Michelle (10:58) as long as you're ovulating, there is a possibility that you can get pregnant.

Marc Sklar (11:02) Yeah, yeah, we see this, we do see this all the time.
Look, as soon as you hit 35 and 38, and certainly 40 and older, you're going to read things and hear things that say, you can't, it's not possible, you won't, you need donor, you need IVF. Whatever it is that you're gonna hear, you're gonna hear it all.

I think the hardest time is when you hear it from the person on the other side of the desk in a white coat who says to you, your only option is donor, just give up. And we all hear variations of those words, whether it's "it's not possible," "just use donor," whatever variation of what I just said, when you go into an office, whether that's your OB,

Michelle (11:46) True.

Marc Sklar (12:01) or your REI or whoever it might be, and you're sitting down talking to them and they see your age, they assume certain things and they make certain judgments. And they express those verbally to you. And you hear that, and that registers in your brain, that embeds into your brain. And you start to believe it. Well, yeah, right.

Michelle (12:22) It's nocebo.

True.

Marc Sklar (12:28) I've never heard it said that way. I really like that phrase. Yeah.

Michelle (12:31) You're never going to be able to get it out of your head now. Every time a woman comes in and tells you the story.

Marc Sklar (12:36) Yeah. And so look, they said this to you, and our brains are really strong and we imprint these negative things very easily. It's much harder to imprint all the positive; it takes more effort. And so it imprints into our brain. And now we start to believe it. Well, Dr. So-and-so said it's not possible, I'm not going to do it, I can't. And then we repeat that to ourselves so often that

Michelle (12:49) Right. It's true.

Marc Sklar (13:05) now our body and our brains believe that to be true. So if someone says something negative to you, you have to work double or triple as hard on yourself to get that out. And you need to express to them, I didn't come here to hear negativity.
I didn't come here for you to tell me that I can't. I'm determined to get pregnant.

Michelle (13:09) 100%.

Marc Sklar (13:33) And it's fine if you're not able or willing to help me, I'll go someplace else, but I don't need you to tell me that I can't do it, because I know that I can. And you have to do it in that moment. You have to say that in that moment to them, because what you're saying to them you're also repeating back to yourself, to retrain yourself and very quickly get rid of that negative comment, so it doesn't embed into your brain, into your consciousness.

Michelle (13:52) Yeah.

Marc Sklar (14:00) But it also allows them, they need to be woken up. One, they need to be told this is not okay. And two, you have to have the power and the strength to verbalize that truth to them. Okay? You might not be getting pregnant in the conventional way that you thought or they thought. You might not get pregnant in the way that they would like you to. It doesn't mean that you cannot get pregnant. It means that it might take longer. It might be a different path. It might be...

whatever. And so I think it's really important in those moments to stand up for yourself and verbalize that, and let them know. They might not like it. It's okay. Yeah, you didn't like what they said to you either. So it's fine.

Michelle (14:41) Yeah, exactly.

Totally, totally. And that's really taking your power back. Regardless, ultimately it's your journey. You're not there to make the doctor feel better.

Marc Sklar (14:53) Right, listen, I think that's such an important piece. Unlike most other medical visits and specialties, you are a consumer buying their service. Just because they're wearing a white coat and they have MD after their name does not mean

that they get the say in everything. It's your journey, it's your process. You're paying them a lot of money for their service.
And even if you have insurance coverage, by the way, it's still insurance coverage that can go someplace else to pay for somebody else. So it doesn't have to go to them. And so...

You have the power. They make it feel like they have the power and they control the situation. I want you to know you have the power. You control the situation and your outcome. It's your dollars that you're spending. You are and should be an equal participant in this process with them. And they don't have to dictate everything. Now, I'm not saying you're telling them the protocols to use all the time, but

it needs to be a joint effort in this process. It's totally different than going into a different medical environment and a different provider for different services. They're not charging you $20,000, those other providers, for a service that's elective. So stand up for yourself. Have that empowerment to do so.

Michelle (16:34) Yeah.

Right.

Yeah. And another point that I want to make is, you know, when you're working with a doctor, it doesn't matter how qualified they are, I feel like they should believe in your outcome. If they're doubting your outcome, find another person.

Marc Sklar (16:57) Yeah, right now, 100%, 100%. Look, I am not opposed to donor egg. I think that donor egg is something that is super valuable and has its place. What I don't like is that just because of your age, someone is telling you, you need to use donor egg.

And there is certainly a place for donor egg. I have lots of women that I work with who use donor egg very successfully, and I'm a big proponent of it. But the reason they are telling you, just based on your age, to use donor egg is because their success rates are impacted by your age and the potential challenge of getting pregnant at your age.

Michelle (17:51) Right.

Marc Sklar (17:55) And so for them and their success rates, they have higher chances using donor egg, and it's an easier process, so they would prefer that you use donor egg for that purpose. Okay, now again, does it mean that it's not the right decision for some? It just means that if they're making that decision based only on age, I think there are a lot of other pieces that need to be looked at before that decision is made.

Michelle (18:24) What you just said is so important because it's the reality. Really, it's the system. It's the reality, because their ability to really stay on top of their game is for their statistics to make them look really good. And it's human nature. They're going to be thinking about that when they're talking to you, regardless. They can be great doctors, the two can coexist, but

they're also in a business. So it's important to keep that in mind. And the realistic aspect of it is that it's going to make them look better. They don't want to take a risk. They see it as a risk, but that doesn't mean that just because they see it that way, that that's really the case for you.

Marc Sklar (18:54) 100%.

Right, yeah. Look, absolutely. I say this also, so everyone who's listening knows: 50 % of the couples that I work with are doing IVF. I could group IUI into that as well, so IUI or IVF, some form of assisted reproductive technique. Of that number, about 15% use donor egg.

Michelle (19:33) Mm-hmm.

Marc Sklar (19:34) So I'm fine with it. I'm happy to support you with it. I just often think that choice is made prematurely, or that push in that direction is done prematurely, without really giving you a fair chance, really looking at your case as a whole versus just looking at you as an age, as a number.

Michelle (19:56) Same thing with IVF. I also find with IVF that people will start out, maybe they've tried for three months and they're young, and they're like, you know, I just want a baby now.
So I'm going to go do IVF. And a lot of people have a preconceived notion just because you're paying a huge amount of money and there's technology involved, but that doesn't give a guarantee. In fact, I've seen people get more successful naturally, even at an older age, than going through IVF.

Marc Sklar (20:05) Easy.

Well, the success rates for IVF, for those who are listening and aren't aware, are relatively low. You know, from 30 to 35, those success rates are around 35 to 40 % ish. You know, depending on the clinic, some clinics might have a little higher, some a little bit lower, but roughly, you know, in the United States, that's an accurate statistic. It only goes down as you get older. And if you look, because most clinics,

Michelle (20:50) you

Marc Sklar (20:56) don't have to report, but most clinics do report their statistics. If you look at statistics for IVF in their 40s without donor egg, those statistics are very, very low. So then you have to ask yourself, is this worth the money? Or can I get the same or better statistics and results trying naturally by addressing the root issues, by focusing on the things that I need to focus on, by getting healthy?

Are those better for me? Are those odds better? One of the beautiful things, you mentioned it: you work with younger women, and after three months they move forward with IVF. One of the beautiful things that's happened over the last 20 years is that fertility treatments and the fertility journey have become something that is more accepted, and people are more willing to talk about it. And as a result of that,

marketing towards those communities has increased dramatically. And as a result, IVF has been spoken about more frequently because of that marketing. And so it's become so much more commonplace that couples who want to get pregnant, young, try for three months or six months, and hey, it's not working.
you know, so-and-so did IVF and got pregnant, so, you know, we should just go do IVF. And they don't know the real statistics. They believe that it's a hundred percent successful. And as a result, it becomes the first line of treatment versus, you know, what used to be the third or fourth or fifth line of treatment, right? Well, I used to go to my OB and they would do that, and then I would try other things. Now it's like, I'm not pregnant, let's just go do IVF. Right? And so, so many couples end up doing IVF

thinking it's faster or more convenient, without really working on themselves. And in turn, they realize later on, I really shouldn't have started this way, because it's not a guarantee and I haven't been successful. So they go there very prematurely. My preference would be to see couples have patience. Take a step back. What's not working for me?

Michelle (23:03) Mm-hmm.

Yeah.

Marc Sklar (23:17) What do I need to improve and correct? And let's work on the root issues so that way you can be successful moving forward. And I had a conversation two weeks ago with a woman. I talked about it briefly this week on my Instagram stories, because I think we were both frustrated with each other during this conversation. She has a history of repeated chemical pregnancies.

And she is frustrated with the lack of results, and I've just started working with her. And as we just started working together, she had another chemical, and I asked her to stop trying for a little bit. I'm like, you're just having these ongoing chemicals and we're really not able to make progress. I just wrote out this plan for you. I want to give it some opportunity. You know, it's close to the end of the year.

How about we just take off right now through the end of the year? Let's just take a break. Let's enjoy life and let's work on ourselves.
And she felt like she was wasting time. I could feel her, as soon as I said it, getting anxious about just creating this time. And she's in her early forties. And she said, you know, I don't think I'm gonna do that. I can't do that. I'm gonna...

Michelle (24:19) Mm-hmm.

Marc Sklar (24:44) I'm going to keep trying because I feel like I'm wasting time. We had this back and forth, this long conversation back and forth. I'm going to totally support her and respect her decision about how she wants to move forward. I just don't agree. Sometimes taking a step back and working on ourselves and creating space is progress towards our ultimate goal. I know that we think that if we're not actively having intercourse and trying to conceive at ovulation every month, we're wasting time.

Michelle (24:57) Yeah.

yeah.

Marc Sklar (25:15) Well, in a situation like this, we're just spinning our wheels. If all we do is continue to do the same thing every month, expecting a different result, I don't know how that's gonna change. So we need to give ourselves a little bit of opportunity. And she's so worked up about it and anxious about it, she's trying to control every aspect, and she's scared. She's making this decision out of fear.

Michelle (25:19) Totally.

Mm-hmm.

Marc Sklar (25:43) So one, the decision's being made out of fear. And two, she's trying to strangle it, like, I'm gonna control all of this. We are typically not successful if we make decisions out of fear, number one, okay? And number two, the more we try to strangle something, the more you constrict it and don't allow it to be successful.

We need to create some space, some room for things to occur. Okay? And I'm a big proponent of this: let's just take a step back. Let's take a deep breath. Let's understand, let's give ourselves some space and not have to be so stressed about this.
Most things, if you think about it, are created in space. In a little bit of a vacuum. Sorry, not a vacuum, in a little bit of space. If we're constantly trying to control it, there's no space for creation.

Michelle (26:19) Yep. Yeah.

Marc Sklar (26:39) There's no place, no opportunity for something to be created in. So I think, you know, a beautiful painting is created from a blank canvas. It's created from space. And the same thing with our life. We need to create an opportunity for life to be created. And so that means not straining, not holding on so tight, not trying to control every little thing.

Michelle (26:52) Mm-hmm. Yep.

Marc Sklar (27:08) Let's take a step back. I'm not saying you don't do the right things. I'm saying we don't try to control all of those things so closely. And I think this is really such an important lesson for all of us, because our tendency when we're told something is, I'm gonna do it differently. I'm gonna add this in, right? And you're just like more and more and more and more. So that's like this stranglehold that happens.

Michelle (27:29) Mm-hmm.

Marc Sklar (27:35) And I want us all to just let go a little bit more. It doesn't mean you're giving up. It doesn't mean you're taking a break. It doesn't have to be. It means you're just not holding on so tight to the outcome and the process. And I think this is so, so valuable for us. Difficult to do. I'm not saying it's easy, but it's so valuable. You know, I know her and I were both...

kind of frustrated by the conversation, because it didn't feel like she was listening to me, and she didn't feel like she wanted to move on with my recommendations. She felt frustrated by me asking her to take a break. But I say it out of all love. That is what I feel is going to be the most beneficial for her in that situation. And I've had these conversations with others in the past, and I'm just saying this from experience.
So for all of you listening, sometimes we just gotta let go a little bit.

We've got to just ease up just a little bit.

Michelle (28:31) I love this.

Yeah, no, I love this so much. You have no idea. Cause I think that, like you just said, you've had so much experience, you've seen this. And when you do something over and over again for many years, what happens is you start to get a feeling for it. You know, my husband works in the ER. He's starting to have a feel. He gets a sense when somebody's really sick or somebody's saying they're sick. You start to get a sixth sense. You know, maybe we can't measure that, but it's a real thing. And I love that you talk about that. Cause to me that's

Marc Sklar (28:37) Yeah.

Michelle (29:04) being in a state of flow. Being in a state of flow is the same exact thing that happens in our body when our chi flows and our vitality is able to feed all of our organs. It cannot happen when it's constricted. And then going inward, that's just going into the yin. You can't be constantly yang. You have to go back into the yin as well. And yin is incredibly productive.

Marc Sklar (29:25) Yeah.

Michelle (29:28) Like what happens when we're sleeping? We're in a state of yin. It's the most productive thing your body can do. You can't possibly have so much going on without that kind of inert state. You know, so it's, yeah, it's totally important. But also, I don't know if you ever follow Dr. Joe Dispenza. I'm obsessed with his teachings. And have you ever done his meditations? So his meditations, he actually takes you through a form of induction, which

Marc Sklar (29:48) Mm-hmm. Yeah.

No.

Michelle (29:58) It's not hypnosis, but he gets you into a state of space, of becoming aware of space. Because when you become aware of space... And everything that he does is based on science. He actually has a whole research team on this.
And this idea of kind of allowing this state of space, as they learn in quantum physics, you know, getting to this place where we're not locked into the material world. We're not locked in.

We're kind of moving back so we can allow this divine intelligence to take over. And then it fixes things. It takes care of your body. It does what it needs to do. Cause that's not our job. Our job is to direct and to intend, but our job is not to fix every single thing. When we try to do that, all we're doing is getting in the way of this divine intelligence. So I love that you're saying this, because it totally speaks the language that I'm feeling when it comes to

fertility health and overall health, like every way really.

Marc Sklar (31:00) Yeah, I agree. It's something I talk about. I have to do it, I feel like, repeatedly to the same person to get them to hear the message. And it's not intuitive. Like, personality-wise, and for many of us, our goal is like, I just want to fix it. I want to solve it. I want to do it. That creates this stranglehold. And so it's not intuitive for them to kind of

Michelle (31:08) Yeah, because it's not common knowledge. It's not common.

Mm-hmm.

Marc Sklar (31:30) pull back a little bit and feel like that's moving forward. But it is.

Michelle (31:34) Yeah. Yeah, totally. Cause I mean, we're conditioned to, you know, first of all, get quick fixes. I mean, we've been conditioned for years, and this is all marketing for quick fixes, like quicker, faster, better, you know. And we also are conditioned to no pain, no gain. You know, you have to work for it. You have to get it. You have to be on top and

Marc Sklar (31:46) Mm-hmm.

Michelle (31:59) So over time, this is just a habit. That's going to be our knee jerk reaction or response to pretty much anything, but it's not necessarily the response your body needs.

Marc Sklar (32:10) Yeah, no, absolutely.
And it's actually with the younger generation, that's only getting worse. Maybe not the no pain, no gain part, but the quick fix. That's our generation. Yeah. The younger generation is like, I don't want any pain, but I want all the gain. Yeah. And the quick fix, you know, part of it is because of the phone.

Michelle (32:20) Yeah, that might be more our generation. This is true. It's true. Yeah. I just want to be on my phone.

Dopamine.

Marc Sklar (32:39) The dopamine, but also, as much as Amazon has been a great service to so many people, it's a huge disservice. We, and especially the younger generation, expect everything now, in a day. Right? That's the quick fix. That's immediate gratification. Free delivery, two days. Now everyone expects free delivery and they want it there in two days. And it doesn't work like,

Michelle (32:55) Mm-hmm. Yeah.

Marc Sklar (33:09) the world doesn't typically work that way, but they've preconditioned us to this. And that's to our detriment, right? Because that gets translated across the board to all aspects of our life. Now we want things faster. We want more immediate gratification. It should have been fixed. Why didn't they get back to me? Right? Like all of these things, I think that's a problem. Yeah.

Michelle (33:32) I'm like, we're on the same page. 100%. Yeah. And I think that, yeah, these are mental patterns that we're constantly repeating. And I'll be honest. I mean, ever since I had my phone, I just don't feel as sharp. I don't remember as much. My attention can't stay on one thing. And even me, I'm aware of this and it's impacting me.

Marc Sklar (33:41) Mm-hmm.

Right, yeah, yeah, yeah. One of my favorite things to do, both to bother my children and because it's beneficial to them, is if we need to order something from Amazon, I put it on the longest shipping option possible. Like if it says one week or two weeks, that's what I pick. Every time.
I mean, unless I immediately need something, whatever. But like...

Michelle (34:08) that's smart.

That's actually really smart.

You need it. You'll use it when you need it.

Marc Sklar (34:18) Yeah, but in general, I use the longer shipping option because I'm trying to retrain their minds to be like, it's not here yet. Okay, it will come. It's not the end of the world, right? It will arrive. And usually Amazon gives you a little benefit for that delay, by the way. Yeah.

Michelle (34:36) Yeah, yeah, yeah, right. It's a little cheaper. That's really smart. That is actually really, really smart. And then you can put things in one box. So it also is good for the environment. So, when it comes to recurrent pregnancy loss, because you mentioned you're talking about chemical pregnancies, what are some of the common factors that you've seen clinically?

Marc Sklar (34:46) Yeah, and good for the environment.

Yeah.

Yeah, so chemical pregnancy could be a little bit different, but if we're talking about, you know, recurrent pregnancy loss or, you know, multiple miscarriages, then there are four buckets that I put things into. The first bucket is one we have to look at and analyze, but one we potentially can't do much about, which is genetics, right? Is there some sort of genetic abnormality that's occurring, potentially

Michelle (35:24) Mm-hmm.

Marc Sklar (35:30) due to my genetics or the combination of mine with my partner's, and what's going on there. I might end up with five causes, actually, now that I think about it. The next one is autoimmune issues. I find this is a huge reason for recurrent pregnancy loss. I will say also, I find this is a big reason for secondary fertility issues

Michelle (35:41) Hey, good.

Marc Sklar (35:59) with recurrent pregnancy loss.
So secondary meaning you've been successful with a pregnancy one time or multiple times, and then at some point you're trying again and you're not successful, but in this case, let's just say you've had a loss. And so I find that autoimmune issues are much more common in that situation, because something happened in one of the previous pregnancies or postpartum that caused some sort of autoimmune issue that has triggered this outcome or contributed to this outcome.

Michelle (36:26) Mm-hmm.

Marc Sklar (36:28) Another one is blood clotting factors: that there is some issue, whether that's genetic or not, because it doesn't have to be genetic, that is contributing to more clotting factors, which doesn't allow that embryo to implant properly, and you could have a miscarriage. So that's three. Four, uterine issues.

That could be wide; that could be like a bigger bucket that doesn't get talked about as frequently. So what's going on with implantation that might be contributing to that? Is there an infection, a virus, a bacteria? Is there inflammation? Is there endometriosis? What is going on inside the uterine cavity and with the endometrium that could be causing this pregnancy or multiple pregnancies to not be able to be held?

And then the last one, which is male factor. So 50 % of all miscarriages are male factor related. Most typically in those, it's going to be some sort of DNA fragmentation issue. So the DNA of the sperm has been compromised in some way, and that's contributing to that loss. That's the one that unfortunately we don't talk about as much, because, like, why would a male

Michelle (37:43) Mm-hmm.

Marc Sklar (37:57) contribute to the miscarriage, you know, when they're not carrying. So that one gets ignored, but it's something that needs to be ruled out. So those are the, I said four, but really five, those are the five reasons that we should look at.

Michelle (38:10) Yeah, for sure.
And also the microbiome. You know, the vaginal microbiome can impact a lot.

Marc Sklar (38:14) Yeah, so I look at that in that fourth one, with the uterine environment. So to me, that microbiome is a piece that I look at when I'm evaluating that. Yeah.

Michelle (38:23) Yeah. And I feel like they should always look at that, like before transfers. I mean, cause people are paying so much money. And I know in Spain it's more commonplace for them to give vaginal suppositories for probiotics. And I feel like it would really be very helpful for a lot of people.

Marc Sklar (38:33) Yep.

Great.

Yeah, I've started running that test much more frequently in the last year. And I can't say I run it for everybody, because at some point I'm just balancing the cost of things, right? Like we could run every test under the sun. It's just a matter of cost. But certainly if I see implantation failure, if I see chemical pregnancies, you know, these are the sorts of things that for sure I'll start to look at.

Michelle (38:48) Yeah.

Mm-hmm.

Yeah.

Yeah.

Yeah, for sure. I mean, we could talk for hours. I love that we talked about this. First of all, it's really interesting just to get your take on things and to hear from another person who's doing the same thing. But also, you know, I love the fact that you were talking about the energetics of it, because I think that when you do this long enough, you start to see patterns, and you can start to see how emotions can really constrict the chi, you know, from our perspective.

Marc Sklar (39:38) Yeah, sure.

Michelle (39:39) So I think that that is really important, because yes, we could look at all the little details and the numbers and the stats, but with the energetics aspect, we can get so focused on the small parts. And then sometimes it's good to zoom back and see the bigger picture. So I thought what you said about that was very, very powerful.

Marc Sklar (40:01) Yeah, all of these things, everything we talked about today, is so valuable for those individuals who need that specific message, right? Like, we're all in a different place and we all have our own journey, but hopefully, you know, the messages we shared today and the information we shared today really resonated with those who are listening.

Michelle (40:10) Yeah.

I'm sure they did, for sure. I mean, it was really valuable information. So it's been great having you back, Dr. Sklar. It's been too long, and we should do this every so often, because I feel like we're never going to really run out of things to talk about. Thank you so much for coming on.

Marc Sklar (40:34) I agree. I'm happy to be on any time. Yeah.

Yeah, I appreciate it, and wishing everyone success on their journeys.

The IVF Journey with Dr Michael Chapman
436. Understanding Recurrent Miscarriage: Causes, Statistics and Modern Solution

The IVF Journey with Dr Michael Chapman

Play Episode Listen Later Dec 10, 2024 6:52


In this insightful episode, Prof. Chapman discusses the complexities of recurrent miscarriage, addressing common causes, misconceptions, and the role of genetic abnormalities. He highlights the importance of compassionate care and the psychological toll of multiple pregnancy losses. Prof. Chapman also shares evidence-based insights into treatment options and the supportive care that can improve outcomes, emphasizing hope and the potential for successful pregnancies even after multiple miscarriages. Explore the 'Prof. Michael Chapman - The IVF Journey' Facebook Page, your reliable destination for cutting-edge insights and guidance within the realm of In Vitro Fertilization (IVF). Don't miss out on the IVF Journey podcast; stay informed with the latest episode updates. Tune in for expert discussions and valuable information on navigating the intricate path of IVF.

SAGE Orthopaedics
AJSM November 2024 Podcast: The Influence of Kinesiophobia and Pain Catastrophizing on Disease-Specific Quality of Life in Patients With Recurrent Patellofemoral Instability

SAGE Orthopaedics

Play Episode Listen Later Dec 2, 2024 26:16


The Banff Patellofemoral Instability Instrument 2.0 (BPII 2.0) is a disease-specific, quality of life patient-reported outcome measure (PROM) that is valid and reliable in patients with recurrent lateral patellofemoral instability (LPI). Quality of life encompasses the physical, emotional, and psychological aspects of patient functioning and recovery.   In conclusion, a statistically significant correlation was evident between the BPII 2.0 and the other PROMs. The BPII 2.0 does not explicitly measure kinesiophobia or pain catastrophizing; however, the significant statistical relationship of the TSK-11 and PCS to the BPII 2.0 suggests that this information is being captured and reflected.   Click here to read the article.

Viva Learning Podcasts | DentalTalk™
Ep. 622 - Composite Restorative Designed to Fight Recurrent Decay and Post-Op Sensitivity

Viva Learning Podcasts | DentalTalk™

Play Episode Listen Later Dec 1, 2024 23:00


To help manage post-op sensitivity and recurrent decay in composite restorative dentistry, we do have options in our selection of composite resin materials. Is your composite material bioactive? Does it actively participate in the oral ecosystem, helping to reduce post-op sensitivity and acid-producing microbes at the tooth-composite interface? To tell us more about it is our guest, Dr. Neville Hatfield. Dr. Hatfield graduated Magna Cum Laude from Boston University School of Dental Medicine. He completed a GPR at the Manhattan Veterans Affairs Hospital, where he provided comprehensive prosthetic and surgical treatment to military veterans. He currently practices in New Jersey.

NetWorth Radio
NetWorth Radio's Texas Global Business Leadership Series: Spencer McGowan Interviews Brad Olsen and Oliver Doolin from Recurrent Advisors in Houston and The Future of Energy Infrastructure!

NetWorth Radio

Play Episode Listen Later Nov 26, 2024 12:31


JACC Podcast
Prevalence and Impact of Recurrent Rejection on Pediatric Heart Transplant Recipients: A PHTS Multi-Institutional Analysis

JACC Podcast

Play Episode Listen Later Nov 18, 2024 9:47


In this episode, a study on pediatric heart transplant recipients highlights the decreasing prevalence of recurrent rejection, yet finds that children who experience multiple rejection episodes face a significantly higher risk of graft loss. Notably, racial disparities were observed, with Black children showing poorer outcomes, suggesting the need for standardized care protocols and a focus on eliminating these inequities in future heart transplant practices.

Inform Performance
Athletic Shoulder: Dr Margie Olds - Managing Recurrent Shoulder Instability

Inform Performance

Play Episode Listen Later Nov 12, 2024 57:26


Episode 161: In this episode of the Athletic Shoulder podcast, Ben Ashworth interviews Dr Margie Olds, a leading physiotherapist specialising in shoulder instability. Margie has extensive international experience: she has worked as a physiotherapist in New Zealand, the USA, and the UK, and she was the lead therapist for British Canoeing and attended the Athens Olympic Games in 2004. - Topics discussed Predicting recurrent shoulder instability Shoulder instability risk assessment tools. Conservative vs. surgical management for first-time traumatic anterior shoulder dislocations. The significance of kinesiophobia during athlete recovery. Assessing shoulder readiness & confidence - Where you can find Margie: Linkedin Twitter Website - Sponsors VALD: makers of the Nordbord, Forceframe, ForceDecks and HumanTrak. VALD Performance systems are built with the high-performance practitioner in mind, translating traditionally lab-based technologies into engaging, quick, easy-to-use tools for daily testing, monitoring and training. Hytro: The world's leading Blood Flow Restriction (BFR) wearable, designed to accelerate recovery and maximise athletic potential using Hytro BFR for Professional Sport. -    Where to Find Us   Keep up to date with everything that is going on with the podcast by following Inform Performance on:   Instagram Twitter Our Website -   Our Team   Andy McDonald Ben Ashworth Alistair McKenzie Dylan Carmody Steve Barrett  Pete McKnight  

Sisters in Loss Podcast: Miscarriage, Pregnancy Loss, & Infertility Stories
369 - Recurrent Miscarriages and Male Factor Infertility with Tamika Henderson

Sisters in Loss Podcast: Miscarriage, Pregnancy Loss, & Infertility Stories

Play Episode Listen Later Nov 6, 2024 23:55


Have you heard of male factor infertility? Male factor infertility can be caused by low sperm production, abnormal sperm function, or blockages that prevent the delivery of sperm. Illnesses, injuries, chronic health problems, lifestyle choices, and other factors can also lead to infertility. Today's guest shares her recurrent miscarriage story and a journey in which her husband is experiencing secondary infertility by way of male factor. Tamika Henderson had miscarriages in 2016, 2017, and 2018, and it became so routine she knew when she was miscarrying. After consulting with a fertility doctor she found out her infertility was due to male factor, which was shocking because her husband already had two children. In this episode she takes us back on that journey to healing, and her next steps in her fertility treatments. This episode is for you if you have never heard of male factor infertility and how it affects a marriage. Become a Sisters in Loss Birth Bereavement, and Postpartum Doula Here Living Water Doula Services Book Recommendations and Links Below You can shop my Amazon Store for the Book Recommendations You can follow Sisters in Loss on Social Join our Healing Collective Online Support Group Join the Sisters in Loss Online Community Sisters in Loss TV Youtube Channel Sisters in Loss Instagram Sisters in Loss Facebook Sisters in Loss Twitter You can follow Erica on Social Erica's Website Erica's Instagram Erica's Facebook Erica's Twitter

Behind The Knife: The Surgery Podcast
Journal Review in Colorectal Surgery: Diverticulitis

Behind The Knife: The Surgery Podcast

Play Episode Listen Later Nov 4, 2024 32:08


You have a patient with another episode of acute uncomplicated diverticulitis. This is the third episode. Do they need antibiotics? Is surgery the next step? What is their risk of recurrence with or without surgery? Tune in to find out! Join Drs. Peter Marcello, Jonathan Abelson, Tess Aulet and special guest Dr. Jason Hall MD, MPH as they discuss high-yield papers on diverticulitis.  Learning Objectives: 1. Describe the impact on quality of life for patients who undergo surgery or non-operative management of diverticulitis 2. Discuss the indications for surgery in patients with diverticulitis 3. Describe ongoing clinical trials in management of diverticulitis  References: Santos A, Mentula P, Pinta T, et al. Quality-of-Life and Recurrence Outcomes Following Laparoscopic Elective Sigmoid Resection vs Conservative Treatment Following Diverticulitis: Prespecified 2-Year Analysis of the LASER Randomized Clinical Trial. JAMA Surg. 2023;158(6):593–601. doi:10.1001/jamasurg.2023.0466 https://pubmed.ncbi.nlm.nih.gov/37074706/ Bolkenstein HE, Consten ECJ, van der Palen J, van de Wall BJM, Broeders IAMJ, Bemelman WA, Lange JF, Boermeester MA, Draaisma WA; Dutch Diverticular Disease (3D) Collaborative Study Group. Long-term Outcome of Surgery Versus Conservative Management for Recurrent and Ongoing Complaints After an Episode of Diverticulitis: 5-year Follow-up Results of a Multicenter Randomized Controlled Trial (DIRECT-Trial). Ann Surg. 2019 Apr;269(4):612-620. doi: 10.1097/SLA.0000000000003033. PMID: 30247329. https://pubmed.ncbi.nlm.nih.gov/30247329/ Hall J, Hardiman K, Lee S, Lightner A, Stocchi L, Paquette IM, Steele SR, Feingold DL; Prepared on behalf of the Clinical Practice Guidelines Committee of the American Society of Colon and Rectal Surgeons. The American Society of Colon and Rectal Surgeons Clinical Practice Guidelines for the Treatment of Left-Sided Colonic Diverticulitis. Dis Colon Rectum. 2020 Jun;63(6):728-747. doi: 10.1097/DCR.0000000000001679. 
PMID: 32384404. https://pubmed.ncbi.nlm.nih.gov/32384404/ Hall JF, Roberts PL, Ricciardi R, Read T, Scheirey C, Wald C, Marcello PW, Schoetz DJ. Long-term follow-up after an initial episode of diverticulitis: what are the predictors of recurrence? Dis Colon Rectum. 2011 Mar;54(3):283-8. doi: 10.1007/DCR.0b013e3182028576. PMID: 21304297. https://pubmed.ncbi.nlm.nih.gov/21304297/ Please visit https://behindtheknife.org to access other high-yield surgical education podcasts, videos and more.   If you liked this episode, check out our recent episodes here: https://app.behindtheknife.org/listen

Finding Fertility
Finding the Root Cause of Recurrent Miscarriages - Increasing Your Chances of a Healthy Pregnancy

Finding Fertility

Play Episode Listen Later Oct 18, 2024 10:11


Finding the Root Cause of Recurrent Miscarriages – Increasing Your Chances of a Healthy Pregnancy  "Your fertility struggles might not be about what's wrong with your reproductive system, but what's happening in your gut and adrenals." Topics Discussed:

The Worst Girl Gang Ever
S8 E34 | Resilience Through Recurrent Loss: the Requirement for Self-Advocacy, Holistic Approaches, and Finding Joy

The Worst Girl Gang Ever

Play Episode Listen Later Sep 30, 2024 56:56


This episode features an in-depth discussion with community member Eleanor, who bravely shares her heartbreaking journey with recurrent pregnancy loss. She opens up about her first relatively straightforward pregnancy in 2019, followed by a series of devastating miscarriages over the next few years. She describes the emotional toll of each loss, the frustration with the medical system, and her determination to find answers through holistic approaches. Eleanor shares her most recent steps, including undergoing IVF and working closely with a fertility acupuncturist and naturopath to address potential underlying issues. She also discusses the value of a holistic approach, the limitations of the traditional medical system, and the power of community in healing. In this episode, you'll discover: The emotional and deeply personal toll of recurrent pregnancy loss Eleanor's frustration with the medical system and the need for more comprehensive, holistic support The importance of self-care, finding joy, and taking breaks during the fertility journey An exploration of the available holistic approaches, including acupuncture, nutrition, and immune testing The Warriorship is a membership to help you navigate life after baby loss. It covers every stage of the recovery pathway, and provides support, advice, and a range of emotional tools to help you through this difficult time. This is more than a support group. For more information and to join The Warriorship go to: https://theworstgirlgangever.co.uk/warriorship/ The Worst Girl Gang Ever is a real, honest emotive podcast that covers the heartbreaking subject of miscarriage, infertility and baby loss, expect honest conversations about unspoken experiences. Hosted by TWGGE founders Bex Gunn and Laura Buckingham, this show is a chance to break the silence and really open up the dialogue around the topic of miscarriage and pregnancy loss. 
No more shame, no more taboo - let's ditch that for our children; the ones that will come, the ones that are and the ones that never came to be. Learn more about your ad choices. Visit megaphone.fm/adchoices

Optimal Living Daily
3233: How to Understand When You Love Someone with Recurrent Depression by Dr. Margaret Rutherford

Optimal Living Daily

Play Episode Listen Later Jun 30, 2024 10:04


Discover all of the podcasts in our network, search for specific episodes, get the Optimal Living Daily workbook, and learn more at: OLDPodcast.com. Episode 3233: Dr. Margaret Rutherford offers a heartfelt exploration of loving someone with recurrent depression, sharing Patricia's story to illustrate the complexities and challenges involved. By developing empathy and understanding, loved ones can better support and help manage this difficult condition. Read along with the original article(s) here: https://drmargaretrutherford.com/how-to-understand-when-you-love-someone-with-recurrent-depression/ Quotes to ponder: "Every now and then, the powers that be dig a hole in the floor somewhere, a hole just big enough for you to fall through." "Patricia's eyes filled with tears. Dan looked at her. 'I'm sorry. Now I get it. I'd be paralyzed.'" "You can help by not judging and by giving them the respectful message that you know they're trying." Learn more about your ad choices. Visit megaphone.fm/adchoices