Many of us have plenty of experience with math, especially from our younger years. Perhaps some of your memories of what math was like in school are unpleasant, to say the least. Yet many people are passionate about mathematics, especially Christians who see mathematics as the God-given language by which we can better understand not only the physical world around us, but God Himself. Naturalism has no real answer for why mathematics is so useful, beautiful, and practical, not only for doing science but in our everyday lives. How are beauty and mathematics linked? What do beauty and math tell us about God Himself? This week we wrap up our conversation with youth leader, math professor, friend of Watchman Fellowship, and Christian apologist Paige Lehrmann. Paige will share her passion for mathematics and beauty, and how we can incorporate them in our defense for the hope that is in us (1 Peter 3:15).

Paige Lehrmann is the Director of Student Ministries at St. Andrew's Community Church in Oklahoma City and a student at Dallas Theological Seminary. She earned her B.A. in Philosophy and Mathematics from Oklahoma Baptist University, where she completed an interdisciplinary thesis on the Trinity. Paige went on to pursue graduate studies in mathematics at the University of Oklahoma and has taught as an adjunct professor at Mid-America Christian University. She has presented at apologetics conferences on topics such as the Trinity, the divinity of Christ, and theistic arguments from beauty. Through her work, she hopes to help others think deeply about faith, truth, and the beauty of the Gospel. You may contact Paige via email at Paige.lehrmann@gmail.com.

Free Resources from Watchman Fellowship:
Atheist New Testament scholar Dr. Bart D. Ehrman: www.watchman.org/Ehrman
Atheism: www.watchman.org/Atheism
Latter-day Saints: www.watchman.org/Mormonism
Panpsychism: https://www.watchman.org/files/ProfilePanpsychism.pdf
The New Age Movement: https://www.watchman.org/profiles/pdf/newageprofile.pdf
Hinduism: https://www.watchman.org/staff/jwalker/ProfileHinduism.pdf

Additional Resources:
FREE: We are also offering a subscription to our 4-page bimonthly Profiles here: www.watchman.org/Free
PROFILE NOTEBOOK: Order the complete collection of Watchman Fellowship Profiles (around 700 pages, from Astrology to Zen Buddhism) in either printed or PDF format here: www.watchman.org/Notebook
SUPPORT: Help us create more content like this. Make a tax-deductible donation here: www.watchman.org/Give

Apologetics Profile is a ministry of Watchman Fellowship. For more information, visit www.watchman.org. © 2025 Watchman Fellowship, Inc.
Host: David Rosenblum, MD
Guest: Phillip Kim, MD
Date: January 24, 2025
Time: 6:30 AM

Episode Summary: In this episode of the PainExam Podcast, Dr. David Rosenblum engages with Dr. Phillip Kim to discuss the Federation Pain Care Access, a newly formed organization advocating for improved access to interventional pain treatments. The episode delves into the challenges posed by restrictive coverage policies and the collaborative efforts needed to address them effectively.

Key Discussion Points:
- Introduction to Federation Pain Care Access: A new entity focused on advocating for emergent and standard care in interventional pain treatments, aiming to enhance access through advocacy and legislative solutions.
- Impact of Restrictive Policies: Dr. Kim highlights how restrictive coverage policies harm patients and practitioners, particularly amid the ongoing opioid epidemic. Note that eviCore, AIM, and Optum are not insurance carriers; they are separate entities that oversee utilization management and prior authorization requests on behalf of insurance carriers (HMOs, TPAs, etc.), e.g., BCBS plans and UHS.
- Prior Authorization Challenges: Discussion of the AMA 2022 Prior Authorization Physician Survey, which indicates significant negative impacts on patient care from prior authorization processes.
- Case Studies: Dr. Kim shares specific cases where patients were harmed by denied claims, including issues related to medical cannabis and necessary medical equipment.
- Collaboration with Medical Societies: The Federation works alongside various pain societies and stakeholders to address common concerns and push for better coverage policies.
- Future Goals: Plans for meetings with CMS and Medicare Administrative Contractors (MACs) regarding specific treatments like SI joint radiofrequency ablation, aiming to improve coverage and access.
- Fundraising and Outreach: The Federation seeks to grow its membership and funding through outreach to allied health professionals and patient care groups, while launching a media campaign to raise awareness of patient struggles.
- Legal and Advocacy Efforts: Emphasis on the need for legal considerations in advocacy efforts and the importance of public support in achieving the Federation's goals.
- The NOPAIN Act: Discussion of recent legislation aimed at expanding access to non-opioid treatments and alternatives for chronic pain management.

Guest Bio: Phillip Kim, MD is a leading advocate for pain care access and a founding member of the Federation Pain Care Access. He brings extensive experience in managing chronic pain patients and navigating healthcare policies.

Resources: Federation Pain Care Access website: https://www.painfed.org

Listeners are encouraged to support the Federation Pain Care Access by visiting their website to learn more about their initiatives and consider contributing to help advance their mission. Join Dr. Rosenblum and Dr. Kim in this vital conversation about the ongoing efforts to improve pain care access and the importance of collaboration in overcoming the challenges faced by patients and healthcare providers.

Long Island-based anesthesiologist David Rosenblum, MD, is one of the first interventional pain physicians in the country to integrate ultrasound guidance into his pain practice. Since 2007, he has been an international leader in the treatment of chronic pain. He has helped countless patients suffering from back, neck, knee, shoulder, and hip joint pain and has been at the forefront of regenerative pain medicine, minimally invasive pain therapies, and medical education. Patients can schedule a consultation by going to www.AABPpain.com or calling:
Brooklyn Office: 718-436-7246
Garden City Office: 516-482-7246
This week's EYE ON NPI is as ethereal as it is magical: it's Bel Fuse's 1xN port MagJack and specialty ICMs (https://www.digikey.com/en/product-highlight/b/bel-fuse/1xn-port-magjack-and-specialty-icms). These are specially made Ethernet and Ethernet-USB combo jacks with magnetic transformers inside, easing integration with your Ethernet PHY (https://en.wikipedia.org/wiki/Ethernet_physical_layer) so it can communicate on the network. MagJacks make designs smaller and less noisy - they're a great way to simplify your next Ethernet design and get it to market faster!

Wireless this, 5G that - what we sometimes need are WIRES! Wired networking is much more reliable than wireless and can cover long distances with little loss of signal strength. Particularly since you can also put power over the same wires for nodes that need no other cabling, Ethernet is a reliable networking standard - don't discount it just because of its age! One nice benefit is that you don't have to do SSID/password setup: it's truly plug and play.

Three things are required to add Ethernet. First is a microcontroller or microcomputer with built-in Ethernet Medium Access Control (https://en.wikipedia.org/wiki/Medium_access_control), the low-level packet-forming technology. Some chips have this built in, such as the ESP32 (https://www.digikey.com/short/dz5pv22m) - or you can use a companion chip like the WIZ5xxx series (https://www.digikey.com/en/supplier-centers/wiznet) that can be controlled over SPI. Then, to get onto a network, you'll want the ubiquitous mechanical RJ-45 connector (https://www.digikey.com/short/t28834zr) that will lead to Cat-5 or Cat-6 cable (https://www.digikey.com/short/pnjh3t8d). In between, the signal levels need to be isolated and converted to the ±2.5V differential signal.
To do that we need what is colloquially referred to as the 'magnetics': a cluster of transformers and chokes that make the signal differential, isolate the PHY from the outside world, and reduce the risk of outside spikes and shocks. Both the WIZnet and ESP32 datasheets, for example, include example wiring to help you identify the right configuration. Note that not all chips use the same magnetics impedances/configurations: it depends on the output signal and impedance. Also note that this is separate from PoE magnetics (https://www.adafruit.com/product/3847), which handle power delivery rather than data transfer.

If you don't care about optimizing board size and complexity, you can always use external magnetics with a plain jack. Bel has a full selection of dozens of magnetics for any configuration you may need (https://www.belfuse.com/product-detail/icm-s-discrete-lan-magnetics). For example, the Seeed Ethernet shield (https://www.digikey.com/short/70cvntbm) uses this technique because the PCBA is big enough that they have space to spare. However, when you want to keep your board compact, you can upgrade your design to use one of Bel Fuse's 1xN port MagJacks (https://www.digikey.com/en/product-highlight/b/bel-fuse/1xn-port-magjack-and-specialty-icms).

MagJacks provide two big benefits (https://www.digikey.com/en/product-highlight/b/bel-fuse/1xn-port-magjack-and-specialty-icms): first, they're smaller than separate magnetics plus jack; second, the magnetics are enclosed in the metal shell of the jack, which provides some EMI shielding. For example, we used a combo jack on the Ethernet FeatherWing (https://www.digikey.com/short/9w49r80j) to keep the design single-sided. Which is why we were excited to see the Bel Fuse MagJacks pop up on https://www.digikey.com/new - they're a trusted component we've used before.
For this week's EYE ON NPI, DigiKey is highlighting a selection of the new Bel Fuse MagJacks, with dozens of options available (https://www.digikey.com/en/product-highlight/b/bel-fuse/1xn-port-magjack-and-specialty-icms). There are classic horizontal ones with LEDs (https://www.digikey.com/en/products/detail/bel-fuse-inc/P01-0002-01/25588398). Vertical ones! (https://www.digikey.com/en/products/detail/bel-fuse-inc/P01-1AF2-01/25588382) Countersunk ones for low clearances (https://www.digikey.com/short/5b9mb454). As well as some nifty combo units that contain both USB Type-A and Ethernet (https://www.digikey.com/en/products/detail/bel-fuse-inc/P01-3CG3-01/25588395). Just make sure the internal magnetics match your chipset's needs before selecting one for integration. DigiKey is in the process of stocking all the varieties, but if you want to get started, the P01-1AA2-01 (https://www.digikey.com/short/pw02p9m9) is in stock now for immediate delivery. Order today and you can get this part in your hands by tomorrow morning to help optimize your next Ethernet design!
Explore Birdwatching Innovations at CES 2025 with Netvue's Birdfy Tech (Birdfy.com)

About the Guest(s): Estelle Yang is the PR Manager at Netvue Technologies Company, LTD. She represents a team pioneering innovative technology solutions in the field of birdwatching. Specializing in smart bird feeder products under the company's sub-brand Birdfy, she is actively involved in leveraging cutting-edge camera technologies to bring unique and enriching experiences to birdwatchers worldwide. Estelle's contributions help Netvue Technologies excel in creating smart devices that capture cherished wildlife moments effortlessly.

Episode Summary: In this engaging episode of The Chris Voss Show, host Chris Voss is joined by Estelle Yang, PR Manager at Netvue Technologies Company, LTD. In an insightful discussion of Netvue's latest advancements in birdwatching technology, Estelle elaborates on the Birdfy smart bird feeder. This innovation not only captivates bird enthusiasts but also offers features like bird detection, species identification, and auto-sharing capabilities. The conversation navigates through various product offerings, technological integrations, and future prospects, embodying Netvue's ambition to enrich wildlife appreciation through its smart devices. Expounding on the hallmark features of the Birdfy smart bird feeder, Estelle highlights Netvue's dedication to a comprehensive birdwatching experience, available in various forms like bird feeders, boxes, and special hummingbird variants. The products are equipped with AI-powered cameras for motion detection and species identification, allowing users to capture and share precious wildlife moments seamlessly. Engaging potential CES attendees, Estelle hints at exciting live demonstrations of dual-camera setups and slow-motion capture, redefining digital birdwatching standards.
Key Takeaways:
- Netvue Technologies' trademark innovation, the Birdfy smart bird feeder, is revolutionizing birdwatching with AI-powered cameras that identify bird species and record entertaining wildlife moments.
- Birdfy provides a complete line of products, from smart bird feeders and boxes to hummingbird-focused devices, all designed to integrate seamlessly into any backyard setting.
- The Birdfy app enhances the user experience by offering real-time notifications and automatic video highlights, maintaining a vivid log of visitors to the user's garden.
- Netvue's products integrate sustainable materials and solar power options, reinforcing their commitment to eco-friendly practices and convenience.
- Estelle explains the proactive steps Netvue takes to engage with the birdwatching community, including a dedicated bird fund aimed at fostering nature preservation and education.

Notable Quotes:
"By leveraging cutting-edge camera technology, bird feeder products can detect and identify birds and notify users of their visits."
"We actually have committed to allocating $1 from each sale to set up the Birdfeeder Fund, supporting charitable and educational projects."
"The Birdfy Feeder Dual is the first dual-camera bird feeder with three lenses, setting a new standard for birdwatching."
"Our products are designed to withstand severe environments, ensuring weatherproof and durable birdwatching experiences."
"Being able to watch the bird right up close…we make sure every view is shared in thrilling detail."
Episode Title: Evidence-Based Regenerative Pain Medicine with Guilherme Ferreira Dos Santos, MD CIPS Host: David Rosenblum Guest: Guilherme Ferreira Dos Santos, MD CIPS Episode Overview: In this insightful episode of the PainExam Podcast, Dr. David Rosenblum sits down with Dr. Guilherme Ferreira Dos Santos, a distinguished expert in pain medicine who is well known for his research, educational endeavors and expertise in Regenerative Pain Medicine and Ultrasound-Guided interventions. Together, they delve into the evolving landscape of regenerative pain medicine, focusing on evidence-based practices and the standardization of Platelet-Rich Plasma (PRP) quality. Key Topics Discussed: - Evidence-Based Regenerative Pain Medicine: An exploration of current research and practices that inform effective pain management strategies. - PRP Quality and Standardization: Discussion on the importance of PRP quality in treatment outcomes and the need for standardized protocols. - Ultrasound-Guided Spine Interventions: Insights into the benefits and techniques of ultrasound guidance in performing spinal interventions, including a conversation on avoiding cervical epidurals. - Access to Pain Care: A comparative analysis of the differences in access to pain care across Portugal, Spain, the USA, and Canada, highlighting challenges and opportunities in each region. - Pain Expo Dubai: An overview of the upcoming Pain Expo in Dubai, where both Dr. Rosenblum and Dr. Ferreira Dos Santos will be presenting, sharing their expertise with a global audience. Guest Biography: Dr. Guilherme Ferreira Dos Santos is an Interventional Pain Medicine Specialist and Clinical Scientist with a career spanning Portugal, the United States, Canada, and Spain. He began his journey at the University of Lisbon, earning his Medical Degree in 2014, followed by a five-year residency program in Physical Medicine and Rehabilitation, which he completed in 2020. 
His fascination with Interventional Pain Medicine led him to the Department of Pain Medicine at Mayo Clinic, where he served as an Invited Clinical Research Scholar in 2018 and 2021 under the mentorship of Dr. Mark Friedrich Hurdle. At Mayo Clinic, he contributed to refining ultrasound-guided techniques for chronic spinal pain. Dr. Ferreira dos Santos further advanced his expertise with a Clinical Fellowship in Chronic Pain Medicine at the University of Toronto in 2022, training under esteemed mentors such as Dr. Anuj Bhatia, Dr. Paul Tumber, and Dr. Philip Peng. In this role, he was instrumental in advancing education on ultrasound-guided techniques nationally and internationally, which deepened his clinical skills and passion for mentorship. Currently based in Barcelona, Dr. Ferreira Dos Santos serves as the Senior Specialist and Responsible Clinical Lead for the Education and Training Excellence Center in Pain Medicine at Hospital Clínic de Barcelona. He is also the Director of the Clinical Fellowship Program in Interventional Pain Medicine. Throughout his career, he has lectured at international conferences in over 25 countries and authored more than 35 peer-reviewed Q1 articles. His contributions have earned him several accolades, including the 2018 Grant for Young Clinical Researcher of the Year in Pain Medicine from the Grünenthal Foundation, the 2020 Gofeld Academic Scholarship Award, and the 2022 Nikolai Bogduk Young Investigator Grant. His journey across four countries has shaped his approach to clinical care, research, and mentorship, fueling his mission to improve pain management globally. Listen to the Episode: Tune in to gain valuable insights from Dr. Ferreira Dos Santos and learn more about the future of pain medicine. Available on all major podcast platforms. Links and Resources: - NRAP Academy - Follow Dr. David Rosenblum on X and LinkedIn - Follow Dr. 
Guilherme Ferreira Dos Santos on LinkedIn

Join the Conversation: We encourage our listeners to reach out with their thoughts and questions! Use the hashtag #PainExamPodcast on social media to engage with us.

Subscribe and Review: If you enjoyed this episode, please subscribe and leave a review on your favorite podcast platform. Your feedback helps us improve and reach more listeners!

Next Episode Preview: Stay tuned for our next episode, where we will continue to explore the latest advancements in pain management and treatment options.
Exploring the Efficacy of Autologous Platelet Leukocyte Rich Plasma Injections in Chronic Low Back Pain & Understanding Degenerative Lumbar Spinal Stenosis

Host: David Rosenblum, MD
Episode Date: October 25, 2024

In this episode, Dr. David Rosenblum discusses two significant studies related to chronic low back pain and degenerative lumbar conditions. The first study focuses on the use of autologous platelet leukocyte rich plasma (PLRP) injections for treating atrophied lumbar multifidus muscles, while the second investigates the correlation between muscle atrophy and the severity of degenerative lumbar spinal stenosis (DLSS).

Featured Article 1: Effect of Autologous Platelet Leukocyte Rich Plasma Injections on Atrophied Lumbar Multifidus Muscle in Low Back Pain Patients with Monosegmental Degenerative Disc Disease
Authors: Mohamed Hussein, Tamer Hussein
Key Points Discussed:
1. Background: Correlation between lumbar multifidus muscle dysfunction and chronic low back pain.
2. Study Overview: 115 patients treated with weekly PLRP injections for six weeks and followed for 24 months.
3. Outcome Measures: Significant improvements in NRS and ODI scores, with high patient satisfaction.
4. Conclusions: PLRP injections into the atrophied multifidus muscle are safe and effective for managing chronic low back pain.

Featured Article 2: Correlation Between Severity of Spinal Stenosis and Multifidus Atrophy in Degenerative Lumbar Spinal Stenosis
Authors: Gen Xia, Xueru Li, Yanbing Shang, Bin Fu, Feng Jiang, Huan Liu, Yongdong Qiao
Key Points Discussed:
1. Background: DLSS is a common condition in older adults, often leading to muscle atrophy and disability.
2. Study Overview: A retrospective analysis of 232 patients investigating the correlation between muscle atrophy and spinal stenosis severity.
3. Results:
- Significant differences in the ratio of fat-free multifidus muscle cross-sectional area between stenotic and non-stenotic segments.
- A strong positive correlation between multifidus atrophy and the severity of spinal stenosis.
- Atrophy was more pronounced on the symptomatic side of the spine compared to the contralateral side.
4. Conclusions: The findings suggest that more severe spinal stenosis is associated with greater muscle atrophy, emphasizing the importance of addressing muscle health in DLSS patients.

Discussion: Dr. Rosenblum provides insights into how these studies inform clinical practice for treating chronic low back pain and managing degenerative conditions. He emphasizes the need for comprehensive treatment strategies that consider both muscle health and spinal integrity, which may be achieved via peripheral nerve stimulation of the medial branch nerve and multifidus muscle, or PRP injection into the multifidus muscle.

Closing Remarks: Listeners are encouraged to stay informed about innovative treatment options and the importance of muscle assessment in managing spinal disorders.

Follow Us:
- Subscribe to the PainExam Podcast for more episodes discussing the latest in pain management research and treatments.
- Connect with us on social media [insert social media links].

NRAP Academy also offers:
Board Reviews: Anesthesiology, Pain Management, Physical Medicine and Rehabilitation
Regenerative Medicine Training
Live Workshops
Online Training
The Virtual Pain Fellowship (online training program with a discount to live workshops)
Regional Anesthesia & Pain Ultrasound Course
Private Training Available
Email: Info@NRAPpain.org

Disclaimer: The information presented in this podcast is for educational purposes only and should not be considered medical advice. Always consult a healthcare professional for medical concerns.

References:
Xia, G., Li, X., Shang, Y., et al. Correlation between severity of spinal stenosis and multifidus atrophy in degenerative lumbar spinal stenosis. BMC Musculoskelet Disord 22, 536 (2021). https://doi.org/10.1186/s12891-021-04411-5
Hussein M, Hussein T. Effect of autologous platelet leukocyte rich plasma injections on atrophied lumbar multifidus muscle in low back pain patients with monosegmental degenerative disc disease. SICOT J. 2016 Mar 22;2:12. doi: 10.1051/sicotj/2016002. PMID: 27163101; PMCID: PMC4849261.
Send Everyday AI and Jordan a text message.

Is OpenAI gonna lose money for 5 more years? Will Tesla be able to use AI to solve transportation problems? Why is Microsoft going all in on AI in healthcare? We discuss this week's AI News That Matters.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Ask Jordan questions on AI
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email the Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
1. Google's Role in Advertising and AI
2. Tesla's Struggles with AI-Powered Vehicles
3. Microsoft's Advances in Healthcare AI
4. OpenAI's Financial Outlook and Its Role in the AI Industry
5. Impact of AI Models on Logic and Reasoning

Timestamps:
00:00 AI news continues: Tesla innovates, Microsoft focuses healthcare.
06:02 Nobel Prize in Physics awarded for AI.
09:52 Yahoo search is nearly irrelevant; Google's ad market shifting.
12:00 Google Gemini struggles with up-to-date information integration.
17:40 Interested in self-driving AI cars' business impact.
20:32 Improve ChatGPT use with PPP course access.
23:23 Doctors should use transcription for efficiency.
26:04 OpenAI reports AI-generated election misinformation rise.
28:50 AI spreading misinformation before upcoming US election.
34:58 OpenAI achieved stage 2 with reasoning models.
38:09 Apple invests heavily but relies on third-party models.
42:03 Google, Tesla, Microsoft, OpenAI, Apple AI updates.

Keywords: Google's US search ad market share, Google Gemini, Tesla, CyberCab, RoboVan, Waymo, workplace productivity with self-driving cars, Microsoft AI tools in healthcare, administrative burdens in healthcare, ChatGPT course, Apple AI research, GSM-Symbolic, Meta's Llama, Microsoft's Phi, Google's Gemma, OpenAI's GPT-4, Tesla stock decline, Tesla missing timelines, AI logical reasoning debate, AI vs human cognition, OpenAI projected loss, Google dominance in online search, DOJ actions against Google, Nobel Prize in Physics, AI misuse, medical imaging models, misinformation in US elections, reasoning models, fake content targeting US elections, CyberCab skepticism.

Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/
Thank you for 1m downloads of the podcast and 2m readers of the Substack!
07/07/24
The Healthy Matters Podcast
S03_E17 - Beyond Medication - Effective Strategies for Pain Management

On a scale of 0-10, where is your pain right now? We've all been asked that question at one point or another, but really - think about where it is right now. Most of us experience some level of pain throughout our daily lives that we manage in one way or another, and oftentimes we assume that the best tools for dealing with pain are medications. But they've got their downsides too, and more and more we're finding that medications are not the only, or even the most effective, means of dealing with chronic pain.

Pain is individual, so there's not always a one-size-fits-all solution. On Episode 17 we'll re-shape the conversation around pain with Dr. Catherine Justice, an Integrative Physical Therapist at Hennepin Healthcare. We'll look at how our brains interpret pain, how our thresholds can shift throughout our lives, and explore effective pharmacy-free practices for dealing with pain like breathing techniques, mindfulness, and movement therapies. Did you know that yoga is the Sanskrit word for union? Get wise with us on this episode of the Healthy Matters Podcast!

Here are some resources mentioned in the episode:
Hennepin Healthcare Group Medical Visits
www.nopainmn.org
Project ECHO
Season 1, Episode #16: Acupuncture (Guest: Licensed Acupuncturist Jess Brown)
Season 1, Episode #18: Chiropractic Care (Guest: Dr. Rick Printon)

Got a question for the doc or a comment on the show?
Keep an eye out for upcoming shows on social media!
Email - healthymatters@hcmed.org
Call - 612-873-TALK (8255)
Find out more at www.healthymatters.org
In this episode Clementine shares her experience of birthing her son Wilf. Having always known she wanted to be a mother, and growing up hearing her own mum talk very positively about birth, she was super excited to find out she was pregnant. During her pregnancy she did lots of birth prep: reading books, taking antenatal classes, joining a hypnobirthing course and seeing a local physiotherapist. Clementine laboured at home for a while before going into the hospital, where she was disappointed to find the midwife-led unit was closed. Luckily there was one available room on the labour ward with a birth pool, and the midwives were really supportive of her preferences, so despite the unexpected change in location she was able to have the waterbirth she had hoped for. Clementine talks a little about her breastfeeding journey so far, including the relief she felt when her lactation consultant suggested pumping and bottle feeding to give her a rest in the early days.

Clementine's IG: https://www.instagram.com/goodnessclementinenutrition/
Clementine's website: https://www.goodnessclementinenutrition.com/?r_done=1
My website: www.serenalouth.com
My IG: https://www.instagram.com/serenalouth/
Advocating for Transparency and Oversight in Pain Management

Introduction: Welcome back to PainExam, where we delve into the latest advancements and challenges in pain management. Today's episode highlights a significant advocacy effort made by leading interventional pain physicians and industry experts.

Summary of Lobbying Effort: On March 20, 2024, a group of widely known and respected pain physicians and industry leaders, including Drs. Sean Li, Peter Staats, Mehul J. Desai, David Reece, Hemant Kalia, and David Rosenblum, alongside industry figures Mark Stultz, Christopher Conrad, and Cecelia Ruble, visited Capitol Hill to advocate for greater oversight and transparency in independent review organizations. Despite their busy schedules, they recognized the critical need to address the 0% overturn rate in appeals of denied treatments, which disproportionately affects patients seeking alternatives to surgery and opioid medication.

Importance of Transparency: The issue extends beyond pain management, impacting patients across various medical fields. While opioid therapy may seem economically favorable initially, the long-term consequences, including delayed care and medication side effects, often outweigh the savings. The group emphasized the importance of unbiased review of accessible, cutting-edge treatments to improve patient outcomes and reduce overall healthcare expenses.

Purpose of the Lobbying Effort: Rather than pushing any specific company agenda, the initiative aims to highlight the challenges patients and physicians encounter in securing optimal treatment outcomes.

For Board Prep, Ultrasound Training and more, visit:

Dr. David Rosenblum, a pioneer in interventional pain medicine, particularly in ultrasound-guided procedures and regenerative pain medicine, underscores the necessity of addressing these issues for the benefit of countless patients suffering from chronic pain.

Conclusion and Actionable Steps: To schedule a consultation with Dr. Rosenblum, patients can visit www.AABPpain.com or contact the Brooklyn Office at 718-436-7246 or the Garden City Office at 516-482-7246. Stay tuned for more updates on advancements and advocacy efforts in pain management.

Outro: Thank you for joining us on this episode of PainExam. Be sure to subscribe for future discussions on navigating the complexities of pain management.
Join Phil Zito in Episode 456 of the Smart Buildings Academy Podcast for an enlightening deep dive into T1L, an emerging Ethernet standard poised to revolutionize building automation systems. As we explore the technicalities and applications of T1L, this episode serves as a comprehensive guide for professionals eager to understand and implement this technology in their projects. From explaining the basics of T1L to discussing practical implementation steps, Phil provides valuable insights into leveraging T1L for both retrofit and new installation projects, ensuring your building automation systems are more efficient, cost-effective, and future-proof.

Episode Highlights:
Introduction to T1L: Unravel the basics of T1L, an Ethernet physical layer specification under IEEE 802.3cg, offering a promising solution for extending the reach of building automation networks using single twisted-pair cables.
Key Features and Benefits: Discover T1L's capability to support 10 Mbps full-duplex communication over 1 kilometer, facilitating cost-effective network expansions and leveraging existing wiring infrastructure.
Operational Modes and Applications: Learn about T1L's operational modes and peak-to-peak (Vpp) voltage levels, how they cater to different network requirements, and T1L's suitability for building automation protocols like BACnet/IP, LonWorks, and Modbus RTU.
Technical Deep Dive: Dive into the technical aspects of T1L, including the importance of impedance matching, the roles of PHY transceivers, receivers, and gateways in a T1L setup, and how these components work together to ensure efficient data transmission.
Implementing T1L in Projects: Gain practical insights into assessing existing infrastructure for T1L compatibility, planning network architecture, upgrading devices, and configuring network settings to successfully implement T1L in your projects.
Interactive Learning: Phil encourages listeners to engage with questions and share their thoughts on T1L's impact on the building automation industry, fostering an interactive learning environment. Whether you're looking to enhance your knowledge on cutting-edge Ethernet standards or seeking practical advice on implementing T1L in your building automation projects, Episode 456 of the Smart Buildings Academy Podcast is an invaluable resource. Dive into the world of T1L with Phil Zito and stay ahead in the rapidly evolving field of building automation.
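The reach and speed figures discussed in the episode can be turned into a quick back-of-the-envelope planning check. This is an illustrative sketch of ours, not from the episode: the per-mode length limits and the frame-overhead figure below are commonly cited ballpark values for 10BASE-T1L and should be verified against the IEEE 802.3cg text and your PHY transceiver's datasheet.

```python
# Illustrative sketch (not from the episode): a planning check for reusing
# existing twisted-pair runs with 10BASE-T1L (IEEE 802.3cg).
# The length limits below are the commonly cited figures for the two
# transmit-amplitude modes; verify against your PHY's datasheet.

T1L_MODES = {
    "2.4Vpp": 1000,  # long-reach mode, up to ~1000 m
    "1.0Vpp": 200,   # low-amplitude mode, shorter reach
}

def check_segment(length_m, mode="2.4Vpp"):
    """Return True if a point-to-point T1L segment fits the mode's reach."""
    limit = T1L_MODES[mode]
    return length_m <= limit

def transfer_time_s(payload_bytes, overhead_bytes=38):
    """Rough time to move one Ethernet frame at 10 Mbps full duplex.
    overhead_bytes approximates preamble + header + FCS + interframe gap."""
    bits = (payload_bytes + overhead_bytes) * 8
    return bits / 10_000_000

# Example: an 800 m riser run carrying 1,000-byte BACnet/IP frames
print(check_segment(800, "2.4Vpp"))            # → True  (within long-reach mode)
print(check_segment(800, "1.0Vpp"))            # → False (too long for low-amplitude mode)
print(round(transfer_time_s(1000) * 1000, 3))  # milliseconds per frame
```

Even at 10 Mbps, a full-size frame crosses the link in under a millisecond, which is why T1L is comfortably fast enough for the polling rates typical of BACnet IP, LonWorks, and Modbus traffic.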
What A SHOW folks, I almost don't want to write anything in the newsletter to MAKE you listen haha but I will, since I know many of you don't like listening to me babble. But if you choose one episode to listen to instead of just skimming the show-notes, make it this one. We've had 2 deep dives, one into the exciting world of multi-modality, where we chatted with the creator of Moondream1, Vik, and the co-founders of Prophetic, Wes and Eric, about their EEG/fMRI multimodal transformer (that's right!), and then we had a DEEP dive into the new Hourglass Diffusion Transformers with Tanishq from MedArc/Stability. More than 1,300 tuned in to the live show
Hydroxyapatite Deposition Disease Dr. Rosenblum discusses shoulder pain and the pathophysiology of Hydroxyapatite Deposition Disease. He discusses his personal experience with an infraspinatus tendon tear, and treatments such as NSAIDs, a Lidocaine patch, and steroid injections of the infraspinatus tendon. Dr. Rosenblum discusses his experience with a failed suprascapular nerve block, as well as evidence to support PRP injections and ethical, safe care. Dr. Rosenblum is also the NRAP Academy Course Director for Ultrasound, Regenerative Pain Medicine and Regional Anesthesia CME Workshops, and developed the online PainExam, AnesthesiaExam and PMRExam Board Reviews.
Upcoming Workshops and Events
NYC Regional Anesthesia and Pain Ultrasound CME Workshop Saturday, December 16, 2023 7:30 AM
NYC Regional Anesthesia and Pain Ultrasound CME Workshop Saturday, January 6, 2024 7:30 AM
For an up-to-date Calendar, Click Here!
References
Valerio Sansone, Emanuele Maiorano, Alessandro Galluzzo & Valerio Pascale (2018) Calcific tendinopathy of the shoulder: clinical perspectives into the mechanisms, pathogenesis, and treatment, Orthopedic Research and Reviews, 10, 63-72, DOI: 10.2147/ORR.S138225
Seijas R, Ares O, Alvarez P, Cusco X, Garcia-Balletbo M, Cugat R. Platelet-Rich Plasma for Calcific Tendinitis of the Shoulder: A Case Report. Journal of Orthopaedic Surgery. 2012;20(1):126-130. doi:10.1177/230949901202000128
Hegazi T. Hydroxyapatite Deposition Disease: A Comprehensive Review of Pathogenesis, Radiological Findings, and Treatment Strategies. Diagnostics (Basel). 2023 Aug 15;13(16):2678. doi: 10.3390/diagnostics13162678. PMID: 37627938; PMCID: PMC10453434.
Thanks to the over 17,000 people who have joined the first AI Engineer Summit! A full recap is coming. Last call to fill out the State of AI Engineering survey! See our Community page for upcoming meetups in SF, Paris and NYC. This episode had good interest on Twitter.

Fast.ai's "Practical Deep Learning" courses have been watched by over 6,000,000 people, and the fastai library has over 25,000 stars on GitHub. Jeremy Howard, one of the creators of Fast.ai, is now one of the most prominent and respected voices in the machine learning industry; but that wasn't always the case.

Being non-consensus and right

In 2018, Jeremy and Sebastian Ruder published a paper on ULMFiT (Universal Language Model Fine-tuning), a 3-step transfer learning technique for NLP tasks: the paper demonstrated that pre-trained language models could be fine-tuned on a specific task with a relatively small amount of data to achieve state-of-the-art results. They trained a 24M-parameter model on WikiText-103 which beat most benchmarks. While the paper had great results, the methods behind it weren't taken seriously by the community: "Everybody hated fine tuning. Everybody hated transfer learning. I literally did tours trying to get people to start doing transfer learning and nobody was interested, particularly after GPT showed such good results with zero shot and few shot learning […] which I was convinced was not the right direction, but who's going to listen to me, cause as you said, I don't have a PhD, not at a university… I don't have a big set of computers to fine tune huge transformer models." Five years later, fine-tuning is at the center of most major discussion topics in AI (we covered some like fine tuning vs RAG and small models fine tuning), and we might have gotten here earlier if Jeremy had OpenAI-level access to compute and distribution.
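The ULMFiT three-step recipe can be sketched in miniature. This toy is entirely ours, not fastai code: a unigram word counter stands in for the language model, just to make the pretrain → domain fine-tune → task head pipeline concrete and runnable; the corpora, threshold, and labels are made up.

```python
# A minimal sketch (ours, not from the paper) of the ULMFiT three-step shape:
# 1) pretrain a language model on a general corpus,
# 2) fine-tune it on the target-domain corpus,
# 3) reuse its representation under a small task head.
# The "model" here is just unigram counts feeding a tiny classifier, to keep
# the pipeline runnable without any ML framework.

from collections import Counter

def train_lm(corpus, init=None):
    """'Language model' = word frequencies; `init` lets a later stage
    continue from an earlier stage's weights instead of starting fresh."""
    counts = Counter(init) if init else Counter()
    for doc in corpus:
        counts.update(doc.lower().split())
    return counts

def featurize(doc, lm):
    """Represent a document by the average LM frequency of its words:
    a crude stand-in for 'features the pretrained model already learned'."""
    words = doc.lower().split()
    return sum(lm[w] for w in words) / max(len(words), 1)

# Step 1: general-purpose pretraining (think: WikiText-103)
general = ["the cat sat on the mat", "the dog ran in the park"]
lm = train_lm(general)

# Step 2: continued training on the target domain (think: IMDB reviews)
domain = ["the movie was great", "the plot was dull"]
lm = train_lm(domain, init=lm)   # starts from step-1 weights, not from zero

# Step 3: a task head on top of the (frozen) representation
threshold = 1.0
def classify(doc):
    return "common-wording" if featurize(doc, lm) > threshold else "rare-wording"

print(classify("the cat was great"))  # → common-wording
```

The point of the shape, as in the real paper, is that step 2 and step 3 start from the previous stage's parameters rather than from scratch, which is what makes small task datasets sufficient.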
At heart, Jeremy has always been "GPU poor": "I've always been somebody who does not want to build stuff on lots of big computers because most people don't have lots of big computers and I hate creating stuff that most people can't use." This story is a good reminder of how some of the best ideas are hiding in plain sight; we recently covered RWKV and will continue to highlight the most interesting research that isn't being done in the large labs.

Replacing fine-tuning with continued pre-training

Even though fine-tuning is now mainstream, we still have a lot to learn. The issue of "catastrophic forgetting" and potential solutions have been brought up in many papers: at the fine-tuning stage, the model can forget tasks it previously knew how to solve in favor of new ones. The other issue is apparent memorization of the dataset even after a single epoch, which Jeremy covered in Can LLMs learn from a single example?, but which we still don't have the answer to. Despite being the creator of ULMFiT, Jeremy still professes that there are a lot of open questions on fine-tuning: "So I still don't know how to fine tune language models properly and I haven't found anybody who feels like they do." He now advocates for "continued pre-training": maintaining a diversity of data throughout the training process rather than separate pre-training and fine-tuning stages. Mixing instructional data, exercises, code, and other modalities while gradually curating higher quality data can avoid catastrophic forgetting and lead to more robust capabilities (something we covered in Datasets 101). "Even though I originally created the three-step approach that everybody now does, my view is it's actually wrong and we shouldn't use it… the right way to fine-tune language models is to actually throw away the idea of fine-tuning. There's no such thing. There's only continued pre-training.
And pre-training is something where from the very start, you try to include all the kinds of data that you care about, all the kinds of problems that you care about, instructions, exercises, code, general purpose document completion, whatever. And then as you train, you gradually curate that, you know, you gradually make that higher and higher quality and more and more specific to the kinds of tasks you want it to do. But you never throw away any data… So yeah, that's now my view, is I think ULMFiT is the wrong approach. And that's why we're seeing a lot of these so-called alignment tax… I think it's actually because people are training them wrong."

An example of this phenomenon is Code Llama, a LLaMA 2 model fine-tuned on 500B tokens of code: while the model is much better at code, it's worse on generic tasks that LLaMA 2 knew how to solve well before the fine-tuning. In the episode we also dive into all the places where open source model development and research is happening (academia vs Discords, tracked on our Communities list and on our survey), and how Jeremy recommends getting the most out of these diffuse, pseudonymous communities (similar to the Eleuther AI Mafia).

Show Notes
* Jeremy's Background
* FastMail
* Optimal Decisions
* Kaggle
* Enlitic
* fast.ai
* Rachel Thomas
* Practical Deep Learning
* fastai for PyTorch
* nbdev
* fastec2 (the underrated library we describe)
* Can LLMs learn from a single example?
* the Kaggle LLM Science Exam competition, which "challenges participants to answer difficult science-based questions written by a Large Language Model"
* Sebastian Ruder
* Alec Radford
* Sylvain Gugger
* Stephen Merity
* Chris Lattner
* Modular.ai / Mojo
* Jono Whittaker
* Zeiler and Fergus paper
* ULMFiT
* DAWNBench
* Phi-1
* Code Llama
* AlexNet

Timestamps
* [00:00:00] Intros and Jeremy's background
* [00:05:28] Creating ULMFiT - a breakthrough in NLP using transfer learning
* [00:06:32] The rise of GPT and the appeal of few-shot learning over fine-tuning
* [00:10:00] Starting Fast.ai to distribute AI capabilities beyond elite academics
* [00:14:30] How modern LMs like ChatGPT still follow the ULMFiT 3-step approach
* [00:17:23] Meeting with Chris Lattner on Swift for TensorFlow at Google
* [00:20:00] Continued pre-training as a fine-tuning alternative
* [00:22:16] Fast.ai and looking for impact vs profit maximization
* [00:26:39] Using Fast.ai to create an "army" of AI experts to improve their domains
* [00:29:32] Fast.ai's 3 focus areas - research, software, and courses
* [00:38:42] Fine-tuning memorization and training curve "clunks" before each epoch
* [00:46:47] Poor training and fine-tuning practices may be causing alignment failures
* [00:48:38] Academia vs Discords
* [00:53:41] Jeremy's high hopes for Chris Lattner's Mojo and its potential
* [01:05:00] Adding capabilities like SQL generation through quick fine-tuning
* [01:10:12] Rethinking Fast.ai courses for the AI-assisted coding era
* [01:14:53] Rapid model development has created major technical debt
* [01:17:08] Lightning Round

AI Summary (beta)
This is the first episode we're trying this. Here's an overview of the main topics before you dive into the transcript.

* Jeremy's background and philosophies on AI
* Studied philosophy and cognitive science in college
* Focused on ethics and thinking about AI even 30 years ago
* Believes AI should be accessible to more people, not just elite academics/programmers
* Created fast.ai to make deep learning more accessible
* Development of transfer learning and ULMFiT
* Idea of transfer learning critical for making deep learning accessible
* ULMFiT pioneered transfer learning for NLP
* Proposed training general language models on large corpora then fine-tuning - this became standard practice
* Faced skepticism that this approach would work from NLP community
* Showed state-of-the-art results on text classification soon after trying it
* Current open questions around fine-tuning LLMs
* Models appear to memorize training data extremely quickly (after 1 epoch)
* This may hurt training dynamics and cause catastrophic forgetting
* Unclear how best to fine-tune models to incorporate new information/capabilities
* Need more research on model training dynamics and ideal data mixing
* Exciting new developments
* Mojo and new programming languages like Swift could enable faster model innovation
* Still lots of room for improvements in computer vision-like innovations in transformers
* Small models with fine-tuning may be surprisingly capable for many real-world tasks
* Prompting strategies enable models like GPT-3 to achieve new skills like playing chess at superhuman levels
* LLMs are like computer vision in 2013 - on the cusp of huge new breakthroughs in capabilities
* Access to AI research
* Many key convos happen in private Discord channels and forums
* Becoming part of these communities can provide great learning opportunities
* Being willing to do real work, not just talk about ideas, is key to gaining access
* The future of practical AI
* Coding becoming more accessible to non-programmers through AI assistance
* Pre-requisite programming experience for learning AI may no longer be needed
* Huge open questions remain
about how to best train, fine-tune, and prompt LLMs

Transcript

Alessio: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO in residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI. [00:00:21]Swyx: Hey, and today we have in the remote studio, Jeremy Howard all the way from Australia. Good morning. [00:00:27]Jeremy: The remote studio, also known as my house. Good morning. Nice to see you. [00:00:32]Swyx: Nice to see you too. I'm actually very used to seeing you in your mask as a message to people, but today we're mostly audio. But thank you for doing the very important public service of COVID awareness. It was a pleasure. [00:00:46]Jeremy: It was all very annoying and frustrating and tedious, but somebody had to do it. [00:00:52]Swyx: Somebody had to do it, especially somebody with your profile. I think it really drives home the message. So we tend to introduce people for them and then ask people to fill in the blanks on the personal side. Something I did not know about you was that you graduated with a BA in philosophy from the University of Melbourne. I assumed you had a PhD. [00:01:14]Jeremy: No, I mean, I barely got through my BA because I was working 80 to 100 hour weeks at McKinsey and Company from 19 years old onwards. So I actually didn't attend any lectures in second and third year university. [00:01:35]Swyx: Well, I guess you didn't need it or you're very sort of self-driven and self-motivated. [00:01:39]Jeremy: I took two weeks off before each exam period when I was working at McKinsey. And then, I mean, I can't believe I got away with this in hindsight, I would go to all my professors and say, oh, I was meant to be in your class this semester and I didn't quite turn up. Were there any assignments I was meant to have done, whatever. I can't believe all of them let me basically have it. They basically always would say like, okay, well, if you can have this written by tomorrow, I'll accept it.
So yeah, stressful way to get through university, but. [00:02:12]Swyx: Well, it shows that, I guess, you min-maxed the opportunities. That definitely was a precursor. [00:02:18]Jeremy: I mean, funnily, like in as much as I, you know, in philosophy, the things I found interesting and focused on in the little bit of time I did spend on it was ethics and cognitive science. And it's kind of really amazing that it's now come back around and those are actually genuinely useful things to know about, which I never thought would happen. [00:02:38]Swyx: A lot of, yeah, a lot of relevant conversations there. So you were a consultant for a while and then in the magical month of June 1999, you founded both Optimal Decisions and FastMail, which I also briefly used. So thank you for that. [00:02:53]Jeremy: Oh, good for you. Yeah. Cause I had read the statistics, which is that like 90% or something of small businesses fail. So I thought if I start two businesses, I have a higher chance. In hindsight, I was thinking of it as some kind of stochastic thing I didn't have control over, but it's a bit odd, but anyway. [00:03:10]Swyx: And then you were president and chief scientist at Kaggle, which obviously is the sort of competition platform of machine learning. And then Enlitic, where you were working on using deep learning to improve medical diagnostics and clinical decisions. Yeah. [00:03:28]Jeremy: It was actually the first company to use deep learning in medicine, so I kind of founded the field. [00:03:33]Swyx: And even now that's still like a pretty early phase. And I actually heard you on your new podcast with Tanishq, where you went very, very deep into the stuff, the kind of work that he's doing, such a young prodigy at his age. [00:03:47]Jeremy: Maybe he's too old to be called a prodigy now, ex-prodigy. No, no. [00:03:51]Swyx: I think he still counts.
And anyway, just to round out the bio, you have a lot more other credentials, obviously, but most recently you started Fast.ai, which is still, I guess, your primary identity with Rachel Thomas. So welcome. [00:04:05]Jeremy: Yep. [00:04:06]Swyx: Thanks to my wife. Thank you. Yeah. Doing a lot of public service there with getting people involved in AI, and I can't imagine a better way to describe it than fast, fast.ai. You teach people from nothing to stable diffusion in seven weeks or something, and that's amazing. Yeah, yeah. [00:04:22]Jeremy: I mean, it's funny, you know, when we started that, what was that, like 2016 or something, the idea that deep learning was something that you could make more accessible was generally considered stupid. Everybody knew that deep learning was a thing that you got a math or a computer science PhD, you know, there was one of five labs that could give you the appropriate skills and that you would join, yeah, basically from one of those labs, you might be able to write some papers. So yeah, the idea that normal people could use that technology to do good work was considered kind of ridiculous when we started it. And we weren't sure if it was possible either, but we kind of felt like we had to give it a go because the alternative was we were pretty sure that deep learning was on its way to becoming, you know, the most or one of the most, you know, important technologies in human history. And if the only people that could use it were a handful of computer science PhDs, that seemed like A, a big waste and B, kind of dangerous. [00:05:28]Swyx: Yeah. [00:05:29]Alessio: And, you know, well, I just wanted to know one thing on your bio that at Kaggle, you were also the top rank participant in both 2010 and 2011. So sometimes you see a lot of founders running companies that are not really in touch with the problem, but you were clearly building something that you knew a lot about, which is awesome. 
Talking about deep learning, you created, published a paper on ULMFiT, which was kind of the predecessor to multitask learning and a lot of the groundwork that then went into Transformers. I've read back on the paper and you turned this model, AWD-LSTM, which I did the math and it was like 24 to 33 million parameters, depending on what training data set you use. Today that's kind of like not even small, it's like super small. What were some of the kind of like contrarian takes that you had at the time and maybe set the stage a little bit for the rest of the audience on what was kind of like the state of the art, so to speak, at the time and what people were working towards? [00:06:32]Jeremy: Yeah, the whole thing was a contrarian take, you know. So okay, so we started Fast.ai, my wife and I, and we thought, yeah, so we're trying to think, okay, how do we make it more accessible? So when we started thinking about it, it was probably 2015 and then 2016, we started doing something about it. Why is it inaccessible? Okay, well, A, no one knows how to do it other than a small number of people. And then when we asked that small number of people, well, how do you actually get good results? They would say like, oh, it's like, you know, a box of tricks that aren't published. So you have to join one of the labs and learn the tricks. So a bunch of unpublished tricks, not much software around, but thankfully there was Theano and wrappers, and particularly Lasagne, the wrapper, but yeah, not much software around, not much in the way of data sets, you know, very hard to get started in terms of the compute. Like how do you get that set up? So yeah, no, everything was kind of inaccessible. And you know, as we started looking into it, we had a key insight, which was like, you know what, most of the compute and data for image recognition, for example, we don't need to do it.
You know, there's this thing which nobody knows about, nobody talks about called transfer learning, where you take somebody else's model, where they already figured out like how to detect edges and gradients and corners and text and whatever else, and then you can fine tune it to do the thing you want to do. And we thought that's the key. That's the key to becoming more accessible in terms of compute and data requirements. So when we started Fast.ai, we focused from day one on transfer learning. Lesson one, in fact, was transfer learning, literally lesson one, something not normally even mentioned in, I mean, there wasn't much in the way of courses, you know, the courses out there were PhD programs that had happened to have recorded their lessons and they would rarely mention it at all. We wanted to show how to do four things that seemed really useful. You know, work with vision, work with tables of data, work with kind of recommendation systems and collaborative filtering and work with text, because we felt like those four kind of modalities covered a lot of the stuff that, you know, are useful in real life. And no one was doing anything much useful with text. Everybody was talking about word2vec, you know, like king plus queen minus woman and blah, blah, blah. It was like cool experiments, but nobody's doing anything like useful with it. NLP was all like lemmatization and stop words and topic models and bigrams and SVMs. And it was really academic and not practical. But I mean, to be honest, I've been thinking about this crazy idea for nearly 30 years since I had done cognitive science at university, where we talked a lot about Searle's Chinese room experiment.
This idea of like, what if there was somebody that could kind of like, knew all of the symbolic manipulations required to answer questions in Chinese, but they didn't speak Chinese and they were kind of inside a room with no other way to talk to the outside world other than taking in slips of paper with Chinese written on them and then they do all their rules and then they pass back a piece of paper with Chinese back. And this room with a person in is actually fantastically good at answering any question you give them written in Chinese. You know, do they understand Chinese? And is this, you know, something that's intelligently working with Chinese? Ever since that time, I'd say the most thought, to me, the most thoughtful and compelling philosophical response is yes. You know, intuitively it feels like no, because that's just because we can't imagine such a large kind of system. But you know, if it looks like a duck and acts like a duck, it's a duck, you know, or to all intents and purposes. And so I always kind of thought, you know, so this is basically a kind of analysis of the limits of text. And I kind of felt like, yeah, if something could ingest enough text and could use the patterns it saw to then generate text in response to text, it could appear to be intelligent, you know. And whether that means it is intelligent or not is a different discussion and not one I find very interesting. Yeah. And then when I came across neural nets when I was about 20, you know, what I learned about the universal approximation theorem and stuff, and I started thinking like, oh, I wonder if like a neural net could ever get big enough and take in enough data to be a Chinese room experiment. You know, with that background and this kind of like interest in transfer learning, you know, I'd been thinking about this thing for kind of 30 years and I thought like, oh, I wonder if we're there yet, you know, because we have a lot of text. 
Like I can literally download Wikipedia, which is a lot of text. And I thought, you know, how would something learn to kind of answer questions or, you know, respond to text? And I thought, well, what if we used a language model? So language models were already a thing, you know, they were not a popular or well-known thing, but they were a thing. But language models existed, this idea that you could train a model to fill in the gaps. Or actually in those days it wasn't fill in the gaps, it was finish a string. And in fact, Andrej Karpathy did his fantastic RNN demonstration of this at a similar time, where he showed like you can have it ingest Shakespeare and it will generate something that looks a bit like Shakespeare. I thought, okay, so if I do this at a much bigger scale, using all of Wikipedia, what would it need to be able to do to finish a sentence in Wikipedia effectively, to do it quite accurately quite often? I thought, geez, it would actually have to know a lot about the world, you know, it'd have to know that there is a world and that there are objects and that objects relate to each other through time and cause each other to react in ways and that causes precede effects and that, you know, when there are animals and there are people and that people can be in certain positions during certain timeframes and then you could, you know, all that together, you can then finish a sentence like this was signed into law in 2016 by US President X and it would fill in the gap, you know. So that's why I tried to create what in those days was considered a big language model trained on the entirety of Wikipedia, which was, you know, a bit unheard of. And my interest was not in, you know, just having a language model. My interest was in like, what latent capabilities would such a system have that would allow it to finish those kind of sentences?
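The "finish a string" objective Jeremy describes can be illustrated with the smallest possible language model: a bigram counter that completes a prompt with the most frequent next word. This toy is ours, not anything from the episode; the real models of that era were LSTMs (and now transformers) trained on the same next-token objective at vastly larger scale.

```python
# A toy sketch of the "finish a string" framing: a bigram model counts which
# word follows which, then completes a prompt greedily. The training
# objective — predict the next token — is the same idea real LMs scale up.

from collections import defaultdict, Counter

def train_bigram(text):
    """Count next-word frequencies for every word in the corpus."""
    model = defaultdict(Counter)
    words = text.lower().split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def complete(model, prompt, max_words=5):
    """Greedily extend the prompt with the most frequent follower each step."""
    words = prompt.lower().split()
    for _ in range(max_words):
        nexts = model.get(words[-1])
        if not nexts:
            break
        words.append(nexts.most_common(1)[0][0])
    return " ".join(words)

corpus = ("this was signed into law in 2016 by president obama . "
          "the bill was signed into law after a long debate .")
lm = train_bigram(corpus)
print(complete(lm, "signed into", max_words=1))  # → signed into law
```

Even this two-sentence corpus shows the point Jeremy is making: completing "signed into" correctly requires the model to have absorbed regularities of the world from its training text, and scaling that up is where the latent capabilities come from.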
Because I was pretty sure, based on our work with transfer learning and vision, that I could then suck out those latent capabilities by transfer learning, you know, by fine-tuning it on a task data set or whatever. So we generated this three-step system. So step one was train a language model on a big corpus. Step two was fine-tune a language model on a more curated corpus. And step three was further fine-tune that model on a task. And of course, that's what everybody still does today, right? That's what ChatGPT is. And so the first time I tried it within hours, I had a new state-of-the-art academic result on IMDB. And I was like, holy s**t, it does work. And so you asked, to what degree was this kind of like pushing against the established wisdom? You know, every way. Like the reason it took me so long to try it was because I asked all my friends in NLP if this could work. And everybody said, no, it definitely won't work. It wasn't like, oh, maybe. Everybody was like, it definitely won't work. NLP is much more complicated than vision. Language is a much more vastly complicated domain. You know, and you've got problems like the grounding problem. We know from like philosophy and theory of mind that it's actually impossible for it to work. So yeah, so don't waste your time. [00:15:10]Alessio: Jeremy, had people not tried because it was like too complicated to actually get the data and like set up the training? Or like, were people just lazy and kind of like, hey, this is just not going to work? [00:15:20]Jeremy: No, everybody wasn't lazy. So like, so the person I thought at that time who, you know, there were two people I thought at that time, actually, who were the strongest at language models were Stephen Merity and Alec Radford. And at the time I didn't know Alec, but I, after we had both, after I'd released ULM Fit and he had released GPT, I organized a chat for both of us with Kate Metz in the New York Times. 
And Cade Metz answered, sorry, Alec answered this question for Cade. And Cade was like, so how did, you know, GPT come about? And he said, well, I was pretty sure that pre-training on a general large corpus wouldn't work. So I hadn't tried it. And then I read ULMFiT and turns out it did work. And so I did it, you know, bigger and it worked even better. And similar with Stephen, you know, I asked Stephen Merity, like, why don't we just, you know, take your AWD-LSTM and like train it on all of Wikipedia and fine tune it? And he's kind of like, well, I don't think that's going to really fly. Like two years before, I did a very popular talk at KDD, the conference where everybody in NLP was in the audience. I recognized half the faces, you know, and I told them all this, I'm sure transfer learning is the key. I'm sure ImageNet, you know, is going to be an NLP thing as well. And, you know, everybody was interested and people asked me questions afterwards, but nobody followed up because everybody knew that it didn't work. I mean, even like, so we were scooped a little bit by Dai and Le, Quoc Le at Google. They had, I didn't even realize this, which is a bit embarrassing, they had already done a large language model and fine tuned it. But again, they didn't create a general purpose, large language model on a general purpose corpus. They only ever tested a domain specific corpus. And I haven't spoken to Quoc actually about that, but I assume that the reason was the same. It probably just didn't occur to them that the general approach could work. So maybe it was that kind of 30 years of mulling over the Searle Chinese room experiment that had convinced me that it probably would work. I don't know. Yeah. [00:17:48]Alessio: Interesting. I just dug up Alec's announcement tweet from 2018. He said, inspired by CoVe, ELMo, and ULMFiT, we show that a single transformer language model can be fine-tuned to a wide variety of tasks. It's interesting because, you know, today people think of OpenAI as the leader, kind of like the research lab pushing forward the field. What was that like at the time? You know, like kind of like going back five years, people think of it as an overnight success, but obviously it took a while. [00:18:16]Swyx: Yeah. Yeah. [00:18:17]Jeremy: No, I mean, absolutely. And I'll say like, you know, it's interesting that it mentioned ELMo because in some ways that was kind of diametrically opposed to ULMFiT. You know, there was these kind of like, so there was a lot of activity at the same time as ULMFiT's release. So there was, um, so before it, Bryan McCann, I think at Salesforce, had come out with this neat model that did a kind of multitask learning, but again, they didn't create a general fine-tuned language model first. There was ELMo, um, which I think was, you know, actually quite a few months after the first ULMFiT example, I think. Um, but yeah, there was a bit of this stuff going on. And the problem was everybody was doing, and particularly after GPT came out, then everybody wanted to focus on zero shot and few shot learning. You know, everybody hated fine tuning. Everybody hated transfer learning. And like, I literally did tours trying to get people to start doing transfer learning and people, you know, nobody was interested, particularly after GPT showed such good results with zero shot and few shot learning. And so I actually feel like we kind of went backwards for years and, to be honest, I mean, I'm a bit sad about this now, but I kind of got so disappointed and dissuaded by it. It felt like these much bigger labs, you know (Fast.ai had only ever been just me and Rachel), were getting all of this attention for an approach I thought was the wrong way to do it. You know, I was convinced it was the wrong way to do it.
And so, yeah, for years people were really focused on getting better at zero shot and few shot and it wasn't until, you know, this key idea of like, well, let's take the ULMFiT approach, but for step two, rather than fine tuning on a kind of a domain corpus, let's fine tune on an instruction corpus. And then in step three, rather than fine tuning on a reasonably specific task classification, let's fine tune on an RLHF task classification. And so that was really, that was really key, you know, so I was kind of like out of the NLP field for a few years there because yeah, it just felt like, I don't know, pushing uphill against this vast tide, which I was convinced was not the right direction, but who's going to listen to me, you know, cause I, as you said, I don't have a PhD, not at a university, or at least I wasn't then. I don't have a big set of computers to fine tune huge transformer models. So yeah, it was definitely difficult. It's always been hard. You know, it's always been hard. Like I've always been somebody who does not want to build stuff on lots of big computers because most people don't have lots of big computers and I hate creating stuff that most people can't use, you know, and also stuff that's created on lots of big computers has always been like much more media friendly. So like, it might seem like a recent thing, but actually throughout my 30 years in data science, the attention's always been on, you know, the big iron results. So when I first started, everybody was talking about data warehouses and it was all about Teradata and it'd be like, oh, this big bank has this huge room full of computers and they have like terabytes of data available, you know, at the press of a button. And yeah, that's always what people want to talk about, what people want to write about. And then of course, students coming out of their PhDs and stuff, that's where they want to go work because that's what they read about.
And to me, it's a huge distraction, because, like I say, most people don't have unlimited compute, and I want to help most people, not the small subset of the most well-off people. [00:22:16]Alessio: That's awesome. And it's great to hear, because you do such a great job educating people that a lot of times you're not telling your own story, you know? So I love this conversation. And the other thing, before we jump into Fast.AI: a lot of people that I know, when they run across a new architecture or whatnot, they're like, I've got to start a company and raise a bunch of money and do all of this stuff. And yet you were like, I want everybody to have access to this. Why was that the case for you? Was it because you already had a successful venture in FastMail and you were more interested in that? What was the reasoning? [00:22:52]Jeremy: It's a really good question. So I guess the answer is yes, that's the reason why. So when I was a teenager, I thought it would be really cool to have my own company. You know, I didn't know the word startup. I didn't know the word entrepreneur. I didn't know the word VC. I didn't really know what any of those things were until after we started Kaggle, to be honest. Even though the companies I started were what we now call startups, I just thought they were small businesses. You know, they were just companies. So yeah, those two companies were FastMail and Optimal Decisions. FastMail was the first kind of synchronized email provider for non-businesses, so something where you could get the same email at home, on your laptop, at work, on your phone, whatever. And then Optimal Decisions invented a new approach to insurance pricing, something called profit-optimized insurance pricing. So I sold both of those companies, you know, after 10 years. And at that point, I had achieved the thing that as a teenager I had wanted to do.
You know, it took a lot longer than it should have, because I spent way longer in management consulting than I should have, because I got caught up in that stupid rat race. But, you know, eventually I got there, and I remember my mom saying to me, you must be so proud. You know, because she remembered my dream. She's like, you've done it. And I kind of reflected, and I was like, I'm not proud at all. You know, like, people quite liked FastMail. It's quite nice to have synchronized email. It probably would have happened anyway. And I'm certainly not proud that I've helped some insurance companies suck more money out of their customers. Yeah, no, I'm not proud. It's actually, I haven't really helped the world very much. You know, maybe in the insurance case I've made it a little bit worse. I don't know. So, yeah, I was determined not to waste more years of my life working hard to do things which I could not be reasonably sure would have a lot of value. So, you know, I took some time off. I wasn't sure if I'd ever work again, actually. I didn't particularly want to, because it felt like such a disappointment. And, you know, I didn't need to. I had enough money. Like, I wasn't super rich, but I had enough money. I didn't need to work. And I certainly recognized that amongst the other people I knew who had enough money that they didn't need to work, they all worked ridiculously hard, you know, and constantly put themselves in extremely stressful situations. And I thought, I don't want to be one of those idiots who's tied to, you know, buying a bigger plane than the next guy or whatever. You know, Kaggle came along, and I mainly did that just because it was fun and interesting to hang out with interesting people.
But, you know, with Fast.ai in particular, Rachel and I had a very explicit, long series of conversations over a long period of time about, like, well, how can we be the most helpful to society as a whole, and particularly to those people who maybe need more help, you know? And so we definitely saw the world going in a potentially pretty dystopian direction if the world's most powerful technology was controlled by a small group of elites. So we thought, yeah, we should focus on trying to help that not happen. You know, sadly, it looks like it still is likely to happen. But I mean, I feel like we've helped make it a little bit less likely. So we've done our bit. [00:26:39]Swyx: You've shown that it's possible. And I think your constant advocacy, your courses, your research that you publish... you know, just the other day you published a finding on, you know, learning from a single example that I think people are still talking about quite a lot. I think that that is the origin story of a lot of people who are going to be, you know, little Jeremy Howards, furthering your mission. You don't have to do everything by yourself, is what I'm saying. No, definitely. Definitely. [00:27:10]Jeremy: You know, that was a big takeaway from Enlitic. At Enlitic, it definitely felt like we had to do everything ourselves. And I kind of, I wanted to solve medicine. I'll say, yeah, okay, solving medicine is actually quite difficult, and I can't do it on my own. And there's a lot of other things I'd like to solve, and I can't do those either. So that was definitely the other piece: like, yeah, you know, can we create an army of passionate domain experts who can change their little part of the world? And that's definitely happened. Like, I find nowadays, at least half the time, probably quite a bit more, that I get in contact with somebody who's done really interesting work in some domain.
Most of the time, I'd say, they say, yeah, I got my start with fast.ai. So it's definitely, I can see that. And I also know from talking to folks at places like Amazon and Adobe and stuff that, you know, there's lots of alumni there. And they say, oh my God, I got here, and like half of the people are fast.ai alumni. So it's fantastic. [00:28:13]Swyx: Yeah. [00:28:14]Jeremy: Actually, Andrej Karpathy grabbed me when I saw him at NeurIPS a few years ago. And he was like, I have to tell you, thanks for the fast.ai courses. When people come to Tesla and they need to know more about deep learning, we always send them to your course. And the OpenAI Scholars Program was doing the same thing. So it's kind of like, yeah, it's had a surprising impact, you know, and that's just one of, like, three things we do, the course. [00:28:40]Swyx: Yes. [00:28:40]Jeremy: And it's only ever been at most two people, either me and Rachel or me and Sylvain; nowadays, it's just me. So yeah, I think it shows you don't necessarily need a huge amount of money and a huge team of people to make an impact. [00:28:56]Swyx: Yeah. So just to reintroduce fast.ai for people who may not have dived into it much, there are the courses that you do. There is the library, which is very well loved. And I kind of think of it as a nicer layer on top of PyTorch that people should start with by default, and you use it as the basis for a lot of your courses. And then you have, like, nbdev, which, I don't know, is that the third one? [00:29:27]Jeremy: Oh, so the three areas were research, software, and courses. [00:29:32]Swyx: Oh, sorry. [00:29:32]Jeremy: So then in software, you know, fastai is the main thing, but nbdev is not far behind. But then there's also things like fastcore, ghapi... I mean, dozens of open source projects that I've created, and some of them have been pretty popular and some of them are still a little bit hidden, actually. Some of them I should try to do a better job of telling people about.
[00:30:01]Swyx: What are you thinking of? What's on that list? [00:30:04]Jeremy: Oh, I don't know, just little things. Like, for example, for working with EC2 and AWS, I created a fastec2 library, which I think is way more convenient and nicer to use than anything else out there. And it's literally got dynamic autocomplete that works both on the command line and in notebooks, and will autocomplete your instance names and everything like that. You know, just little things like that. When I work with some domain, I try to make it as enjoyable as possible for me to work in. So with ghapi, for example: I think the GitHub API is incredibly powerful, but I didn't find it good to work with, because I didn't particularly like the libraries that are out there. So ghapi, like fastec2, autocompletes, at the command line or in a notebook or whatever, literally the entire GitHub API. The whole thing is, I think, less than 100K of code, because, as far as I know, it's the only one that builds itself directly from the official OpenAPI spec that GitHub produces. And if you're in ghapi and you just autocomplete an API method and hit enter, it prints out the brief docs and then gives you a link to the actual documentation page. You know, GitHub Actions, I can write now in Python, which is just so much easier than writing them in TypeScript and stuff. So, you know, just little things like that. [00:31:40]Swyx: I wish that were an approach more developers took, publishing some of their work along the way. You described the third arm of fast.ai as research, which is not something I see often. Obviously, you do do some research. How do you run your research? What are your research interests? [00:31:59]Jeremy: Yeah, so research is what I spend the vast majority of my time on.
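As an aside, the trick Jeremy describes in ghapi (generating callable, tab-completable operations directly from GitHub's published OpenAPI spec, rather than hand-writing wrappers) can be sketched in a few lines. This is not ghapi's actual code; the class and names below are purely illustrative, and the stub builds a request description instead of issuing one.

```python
# Illustrative sketch only: build one attribute per OpenAPI operationId, so that
# notebooks and shells can autocomplete the whole API surface.
class Api:
    def __init__(self, spec):
        self._ops = {}
        for path, methods in spec.get("paths", {}).items():
            for verb, op in methods.items():
                op_id = op.get("operationId")
                if op_id:
                    name = op_id.replace("/", "_").replace("-", "_")
                    self._ops[name] = (verb.upper(), path, op.get("summary", ""))

    def __dir__(self):
        return list(self._ops)   # this is what powers autocomplete

    def __getattr__(self, name):
        try:
            verb, path, summary = self._ops[name]
        except KeyError:
            raise AttributeError(name)
        def call(**params):
            # A real client would issue the HTTP request here.
            return f"{verb} {path.format(**params)}"
        call.__doc__ = summary   # brief docs show up in help()/autocomplete
        return call

# A tiny hand-written fragment standing in for GitHub's real spec:
spec = {"paths": {"/repos/{owner}/{repo}": {
    "get": {"operationId": "repos/get", "summary": "Get a repository"}}}}
api = Api(spec)
print(api.repos_get(owner="fastai", repo="fastcore"))  # GET /repos/fastai/fastcore
```

Because the operations are generated from the spec rather than written by hand, the "library" stays tiny no matter how large the API is, which is the point Jeremy makes about ghapi's size.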
And the artifacts that come out of that are largely software and courses. You know, so to me, the main artifact shouldn't be papers, because papers are things read by a small, exclusive group of people. To me, the main artifacts should be, like, something teaching people, here's how to use this insight, and here's software you can use that builds it in. So I think I've only ever done three first-author papers in my life, you know, and none of those are ones I wanted to do. They were all ones where, like... so one was ULMFiT, where Sebastian Ruder reached out to me after seeing the course and said, like, you have to publish this as a paper. And he said, I'll write it. He said, I want to write it, because if I do, I can put it toward my PhD, and that would be great. And it's like, okay, well, I want to help you with your PhD, and that sounds great. The second was the masks paper, which just had to exist, and nobody else was writing it. And then the third was the fastai library paper, which, again, somebody reached out and said, please, please write this; we will waive the fee for the journal and everything, and actually help you get it through publishing and stuff. So yeah, other than those, I've never written a first-author paper. So the research is like... well, so for example, you know, DAWNBench was a competition which Stanford ran a few years ago. It was kind of the first big competition of, like, who can train neural nets the fastest rather than the most accurate. And specifically, it was who can train ImageNet the fastest. And again, this was one of these things where it was created by necessity. So Google had just released their TPUs, and I heard from my friends at Google that they had put together this big team to smash DAWNBench, so that they could prove to people that they had to use Google Cloud and use their TPUs, and show how good their TPUs were.
And we kind of thought, oh s**t, this would be a disaster if they do that, because then everybody's going to be like, oh, deep learning is not accessible; [00:34:20]Swyx: you know, to actually be good at it, [00:34:21]Jeremy: you have to be Google and you have to use special silicon. And, you know, we only found out about this 10 days before the competition finished. But we basically got together an emergency bunch of our students, and Rachel and I, and sat for the next 10 days and just tried to crunch through and use all of our best ideas that had come from our research. So particularly progressive resizing: basically, train mainly on small things, train on non-square things, you know, stuff like that. And so, yeah, we ended up winning, thank God. And so we turned it around from being like, oh s**t, this is going to show that you have to be Google and have TPUs, to being like, oh my God, even the little guy can do deep learning. So that's an example of the kind of research artifacts we do. And yeah, all of my research is always, how do we do more with less, you know? So how do we get better results with less data, with less compute, with less complexity, with less education, stuff like that. So ULMFiT's obviously a good example of that. [00:35:37]Swyx: And most recently you published, can LLMs learn from a single example? Maybe you could tell the story a little bit behind that? And maybe that goes a little bit into the very-low-resource learning literature. [00:35:52]Jeremy: Yeah, yeah. So my friend Jono Whitaker and I had basically been playing around with this fun Kaggle competition, which is actually still running as we speak, which is: can you create a model which can answer multiple-choice questions about anything that's in Wikipedia? And the thing that makes it interesting is that your model has to run on Kaggle within nine hours. And Kaggle's very, very limited.
So you've only got 14 gig of RAM, only two CPUs, and a small, very old GPU. So this is cool, you know: if you can do well at this, then this is a good example of, oh, you can do more with less. So yeah, Jono and I were playing around with fine-tuning, of course; transfer learning; pre-trained language models. And we saw this... so we always, you know, plot our losses as we go. So here's another thing we created, actually; Sylvain Gugger, when he worked with us, created it, called fastprogress, which is kind of like tqdm, but we think a lot better. So we look at our fastprogress curves, and they kind of go down, down, down, down, a little bit, little bit, little bit, and then suddenly go clunk, and they drop. And then down, down, down, down a little bit, and then suddenly clunk, they drop. We're like, what the hell? These clunks are occurring at the end of each epoch. So normally in deep learning, this would be... you know, I've seen this before; it's always been a bug. It's always turned out that, like, oh, we accidentally forgot to turn on eval mode during the validation set, so it was actually learning then; or, oh, we accidentally were calculating moving-average statistics throughout the epoch, so it's a recency-weighted moving average or whatever. And we were using the Hugging Face Trainer. So, you know, I did not give my friends at Hugging Face the benefit of the doubt. I thought, oh, they've f**ked up the Hugging Face Trainer, you know, idiots; well, we'll use the fast.ai Learner instead. So we switched over to Learner. We still saw the clunks. And, you know, that shouldn't really happen, because semantically speaking, an epoch boundary isn't a thing; nothing's meant to happen when you go from ending one epoch to starting the next one. So there shouldn't be a clunk, you know. So I kind of asked around on the open source discords, like, what's going on here?
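The "clunk" pattern Jeremy describes is easy to check for programmatically: compare the mean loss in a small window just before each epoch boundary with the window just after. A toy sketch (the window size and threshold are arbitrary illustrative choices, not anything from fastprogress):

```python
def epoch_boundary_drops(losses, steps_per_epoch, window=5, threshold=0.1):
    """Return (epoch, drop) pairs where the loss falls sharply right at an epoch boundary."""
    drops = []
    for e in range(1, len(losses) // steps_per_epoch):
        b = e * steps_per_epoch                      # step index of the boundary
        before = sum(losses[b - window:b]) / window  # mean loss just before it
        after = sum(losses[b:b + window]) / window   # mean loss just after it
        if before - after > threshold:
            drops.append((e, before - after))
    return drops

# Synthetic curve: flat within each epoch, dropping 0.3 at each boundary,
# mimicking the shape of the curves described above.
losses = [2.0] * 20 + [1.7] * 20 + [1.4] * 20
print(epoch_boundary_drops(losses, steps_per_epoch=20))
```

On a real run you would feed in the per-step training losses; a within-epoch bug (eval mode left on, moving-average statistics) produces the same signature, which is why it had always been mistaken for one.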
And everybody was just like, oh, that's just what these training curves look like. They all look like that. Don't worry about it. And I was like, oh, are you all using Trainer? Yes. Oh, well, there must be some bug with Trainer. And I was like, well, we also saw it in Learner, [00:38:42]Swyx: and somebody else is like, [00:38:42]Jeremy: no, we've got our own Trainer. We see it as well. They're just like, don't worry about it. It's just something we see. It's just normal. [00:38:48]Swyx: I can't do that. [00:38:49]Jeremy: I can't just be like, here's something that in the previous 30 years of neural networks nobody ever saw, and now suddenly we see it, [00:38:57]Swyx: so don't worry about it. [00:38:59]Jeremy: I just, I have to know why. [00:39:01]Swyx: Can I clarify? Was everyone that you were talking to seeing it on the same dataset, or on different datasets? [00:39:08]Jeremy: Different datasets, different Trainers. They're just like, no, this is just what it looks like when you fine-tune language models. Don't worry about it. You know, I hadn't seen it before, but, as I say, I'd kept working on them for a couple of years after ULMFiT, and then I kind of moved on to other things, partly out of frustration. So I hadn't been fine-tuning. I mean, Llama's only been out for a few months, right? But I wasn't one of those people who jumped straight into it, you know? So I was relatively new to the kind of Llama fine-tuning world, whereas these guys had been, you know, doing it since day one. [00:39:49]Swyx: It was only a few months ago, [00:39:51]Jeremy: but it's still quite a bit of time. So yeah, they're just like, no, this is what we all see. [00:39:56]Swyx: Don't worry about it. [00:39:56]Jeremy: So yeah, I've just got this brain where I have to know why things are.
And so I asked people, like, well, why do you think it's happening? And they'd be like, oh, it's pretty obvious: because it's memorized the data set. And it's like, that can't be right; it's only seen it once. Look at this: the loss has dropped by 0.3, which means it basically knows the answer. And they're like, no, no, it's just... it has just memorized the data set. So yeah. So look, Jono and I did not discover this, and Jono and I did not come up with the hypothesis. You know, I guess we were just the ones who had been around for long enough to recognize that this isn't how it's meant to work. And so we went back and, like, okay, let's just run some experiments, you know, because nobody seems to have actually published anything about this. [00:40:51]Well, not quite true. Some people had published things, but nobody ever actually stepped back and said, like, what the hell? You know, how can this be possible? Is it possible? Is this what's happening? And so, yeah, we created a bunch of experiments where we basically predicted ahead of time: okay, if this hypothesis is correct, that it's memorized the training set, then we ought to see blah under these conditions, but not under those conditions. And so we ran a bunch of experiments, and all of them supported the hypothesis that it was memorizing the data set in a single pass. And it's a pretty big data set, you know. Which, in hindsight, is not totally surprising, because the theory, remember, of the ULMFiT theory was, like, well, it's kind of creating all these latent capabilities to make it easier for it to predict the next token. So if it's got all this latent capability, it ought to also be really good at compressing new tokens, because it can immediately recognize them as, like, oh, that's just a version of this.
So it's not so crazy, you know, but it does require us to rethink everything, because nobody knows, okay, so how do we fine-tune these things? Because, like, maybe it's fine. Maybe it's fine that it's memorized the data set after one go, and you do a second go, and okay, the validation loss is terrible because it's now really overconfident. [00:42:20]Swyx: That's fine. [00:42:22]Jeremy: You know, I keep telling people: don't track validation loss, track validation accuracy, because at least that will still be useful. Just another thing that's got lost since ULMFiT: nobody tracks accuracy of language models anymore. But, you know, it'll still keep learning, and it does keep improving. But is it worse? Like, now that it's kind of memorized it, is it getting a less strong signal? You know, I don't know. So I still don't know how to fine-tune language models properly, and I haven't found anybody who feels like they do. Like, nobody really knows whether this memorization thing... it's probably a feature in some ways. There are probably things you can do usefully with it. But yeah, I have a feeling it's messing up training dynamics as well. [00:43:13]Swyx: And does it come at the cost of catastrophic forgetting as well, right? Which is the other side of the coin. [00:43:18]Jeremy: It does to some extent; like, we know it does. Look at Code Llama, for example. So Code Llama was, I think, something like a 500-billion-token fine-tuning of Llama 2 that Meta did, using code, and also prose about code. And honestly, they kind of blew it, because Code Llama is good at coding, but it's bad at everything else, you know, and it used to be good. Yeah, before they released it, me and lots of people in the open source discords were like, oh my God, you know, we know this is coming; Yann LeCun's saying it's coming.
I hope they kept at least, like, 50% non-code data, because otherwise it's going to forget everything else. And they didn't; only like 0.3% of their epochs were non-code data. So it did; it forgot everything else. So now it's good at code and it's bad at everything else. So we definitely have catastrophic forgetting. It's fixable; somebody just has to spend their time training a model on a good mix of data. Like, so, okay, so here's the thing. Even though I originally created the three-step approach that everybody now does, my view is it's actually wrong and we shouldn't use it. [00:44:36]Jeremy: And that's because people are using it in a way different to why I created it. You know, I created it thinking the task-specific models would be more specific. It's like, oh, this is like a sentiment classifier, as an example of a task, you know. But the tasks now are, like, RLHF, which is basically, answer questions in a way that makes people feel happy about your answer. So that's a much more general task, and it's a really cool approach. And we see, for example, that RLHF also breaks models. Like, you know, with RLHF'd GPT-4, we know from the work that Microsoft did that the earlier, less aligned version was better. And these are all kind of examples of catastrophic forgetting. And so to me, the right way to fine-tune language models is to actually throw away the idea of fine-tuning. There's no such thing; there's only continued pre-training. And pre-training is something where, from the very start, you try to include all the kinds of data that you care about, all the kinds of problems that you care about: instructions, exercises, code, general-purpose document completion, whatever. And then as you train, you gradually curate that; you gradually make it higher and higher quality and more and more specific to the kinds of tasks you want it to do. But you never throw away any data.
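The "never throw away any data" idea amounts to sampling every source at some fixed, curated proportion rather than ever setting one to zero, the way Code Llama's mix effectively did. A toy sketch; the sources and proportions are made-up illustrative numbers, not a recommended recipe:

```python
import random

def mixture_sampler(sources, weights, n, seed=0):
    """Draw n training examples, always sampling every data type at its given proportion."""
    rng = random.Random(seed)
    names = list(sources)
    probs = [weights[name] for name in names]
    picks = rng.choices(names, weights=probs, k=n)   # which source each example comes from
    return [(name, rng.choice(sources[name])) for name in picks]

# Every source keeps a nonzero weight, so nothing is ever fully forgotten.
sources = {"code": ["def f(): ..."],
           "instructions": ["Q: ... A: ..."],
           "general": ["Once upon a time..."]}
weights = {"code": 0.3, "instructions": 0.2, "general": 0.5}
batch = mixture_sampler(sources, weights, n=1000)
print({name: sum(1 for s, _ in batch if s == name) for name in sources})
```

Curation over the course of training would then mean adjusting the weights (and tightening quality filters within each source) between phases, never dropping a source to zero.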
You always keep all of the data types there in reasonably high quantities. You know, maybe with a quality filter you stop training on the low-quality data, because it's probably fine to forget how to write badly, maybe. So yeah, that's now my view: I think ULMFiT is the wrong approach. And that's why we're seeing a lot of these, you know, so-called alignment taxes, and this view of, like, oh, a model can't both code and do other things. And, you know, I think it's actually because people are training them wrong. [00:46:47]Swyx: Yeah, well, I think you have a clear [00:46:51]Alessio: anti-laziness approach. I think other people are not as good-hearted, you know. They're like, [00:46:57]Swyx: hey, they told me this thing works. [00:46:59]Alessio: And if I release a model this way, people will appreciate it, I'll get promoted, and I'll kind of make more money. [00:47:06]Jeremy: Yeah, and it's not just money. It's, like, this is how citations work, sadly, you know. If you want to get cited, you need to write a paper that people in your field recognize as an advancement on things that we know are good. And so we've seen this happen again and again. So, like I say, with zero-shot and few-shot learning, everybody was writing about that. Or, you know, with image generation, everybody just was writing about GANs, and I was trying to say, like, no, GANs are not the right approach. And I showed, through research that we demonstrated in our videos, that you can do better than GANs, much faster and with much less data. And nobody cared, because, again, if you want to get published, you write a GAN paper that slightly improves this part of GANs in this tiny field, and you'll get published, you know. So it's, yeah, it's not set up for real innovation. It's, you know... again, it's really helpful for me: I have my own research lab with nobody telling me what to do, and I don't even publish, so it doesn't matter if I get citations.
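Stepping back to Jeremy's earlier advice to track validation accuracy rather than validation loss: next-token accuracy is just an argmax comparison, so it stays meaningful even when a memorizing, overconfident model makes the loss look terrible. A minimal NumPy sketch, assuming `logits` of shape (tokens, vocab) and integer `targets`; the numbers are made up for illustration:

```python
import numpy as np

def token_accuracy(logits, targets):
    """Fraction of positions where the highest-scoring token matches the target token."""
    preds = logits.argmax(axis=-1)
    return float((preds == targets).mean())

# Four token positions over a three-token vocabulary:
logits = np.array([[2.0, 0.1, 0.3],
                   [0.2, 1.5, 0.1],
                   [0.9, 0.2, 0.4],
                   [0.1, 0.3, 2.2]])
targets = np.array([0, 1, 2, 2])
print(token_accuracy(logits, targets))  # 0.75
```

Unlike cross-entropy loss, this metric is unaffected by how confident the model is, only by whether its top choice is right, which is why it survives the overconfidence that a second pass over memorized data induces.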
And so I just write what I think actually matters. And, you know, places like OpenAI, the researchers there can do that as well. It's a shame; I wish there were more academic, open venues in which people could focus on genuine innovation. [00:48:38]Swyx: Twitter, which unironically has become a little bit of that forum. I wanted to follow up on one thing that you mentioned, which is that you asked around the open source discords. I don't know if it's too presumptuous to ask, like, what discords are lively or useful right now. I think something I definitely felt like I missed out on was the early days of EleutherAI, which was a real hotbed. And, you know, what is the new Eleuther? And you actually shouted out the Alignment Lab AI discord in your blog post. And that was the first time I even knew... like, I saw them on Twitter, never knew they had a discord, never knew that there were actually substantive discussions going on in there and that you were an active member of it. Okay, yeah. [00:49:23]Jeremy: And then even then, if you do know about that and you go there, it'll look like it's totally dead. And that's because, unfortunately, in nearly all the discords, nearly all of the conversation happens in private channels. You know, and that's, I guess... [00:49:35]Swyx: How does someone get into that world? Because it's obviously very, very instructive, right? [00:49:42]Jeremy: You could just come to the fast.ai discord, which, I'll be honest with you, is less bustling than some of the others, but it's not terrible. Although, to be fair, one of its most bustling channels is private. [00:49:57]Swyx: I guess. [00:49:59]Jeremy: So I'm just thinking. [00:50:01]Swyx: It's just the nature of quality discussion, right?
Yeah, I guess when I think about it, [00:50:05]Jeremy: I didn't have any private discussions on our discord for years, but there were a lot of people who came in with, like, oh, I just had this amazing idea for AGI: if you just thought about, like, if you imagine that AI is a brain, then we... you know, I don't want to talk about it. You don't want to be dismissive or whatever. And it's like, oh, well, that's an interesting comment, but maybe you should, like, try training some models first to see if that aligns with your intuition. Like, oh, but how could I possibly learn? It's like, well, we have a course; just actually spend time learning. Like, you know, anyway. And it's like, okay, I know the people who always have good answers there. And so I created a private channel and put them all in it. And I've got to admit, that's where I post more often, because there's much less, you know, flight-of-fancy views about how we could solve AGI, blah, blah, blah. So there is a bit of that. But having said that, I think the bar is pretty low. Like, if you join a Discord and you hit the participants or community or whatever button, you can see who's in it. And then you'll see, at the top, who the admins or moderators or people in the dev role are. And just DM one of them and say, like, oh, here's my GitHub, here's some blog posts I wrote, you know, I'm interested in talking about this; can I join the private channels? And I've never heard of anybody saying no. I will say, you know, Eleuther's all pretty open. So you can do the EleutherAI Discord still. You know, one problem with the Eleuther Discord is it's been going on for so long that it's very inside baseball; it's quite hard to get started. Yeah. CarperAI, I think, is all open; that's the Stability one. That's more accessible. [00:52:03]Swyx: Yeah.
[00:52:04]Jeremy: There's also, just recently, Nous Research, that does, like, the Hermes models and data sets; they just opened up. They've got some private channels, but it's pretty open, I think. You mentioned Alignment Lab; that's one where all the interesting stuff is on private channels, so just ask. If you know me, ask me, because I've got admin on that one. There's also, yeah, OS Skunkworks; OS Skunkworks AI is a good Discord, which I think is open. So yeah, they're all pretty good. [00:52:40]Swyx: I don't want you to leak any, you know, Discords that don't want any publicity, but this is all helpful. [00:52:46]Jeremy: We all want people, like, we all want people... [00:52:49]Swyx: We just want people who, like, [00:52:51]Jeremy: want to build stuff, rather than people who... and, like, it's fine to not know anything as well. But if you don't know anything, but you want to tell everybody else what to do and how to do it, that's annoying. If you don't know anything and want to be told, like, here's a really small kind of task that, as somebody who doesn't know anything, is going to take you a really long time to do, but would still be helpful, and then you go and do it, that would be great. The truth is, yeah, [00:53:19]Swyx: like, I don't know, [00:53:20]Jeremy: maybe 5% of people who come in with great enthusiasm say that they want to learn and they'll do anything. [00:53:25]Swyx: And then somebody says, like, [00:53:25]Jeremy: okay, here's some work you can do. Almost nobody does that work. So if you're somebody who actually does the work and follows up, you will massively stand out. That's an extreme rarity. And everybody will then want to help you do more work. [00:53:41]Swyx: So yeah. [00:53:41]Jeremy: So just, yeah, just do the work and people will want to support you. [00:53:47]Alessio: Our Discord used to be referral-only for a long time. We didn't have a public invite, and then we opened it, and now we do kind of like channel gating. Yeah.
A lot of people just want to... I remember this from being, you know, a forum moderator. [00:54:00]Swyx: It's like people just want to do [00:54:01]Alessio: like drive-by posting, [00:54:03]Swyx: you know, and like, [00:54:03]Alessio: they don't want to help the community. They just want to get their question answered. [00:54:07]Jeremy: I mean, the funny thing is, our forum community does not have any of that garbage. You know, there's something specific about the low-latency thing where people expect an instant answer. Whereas on a forum, they're in a thread that they know is there forever, so people are a bit more thoughtful. But then the forums are less active than they used to be, because Discord has got more popular, you know? So it's all a bit of a compromise. Running a healthy community is, yeah, always a bit of a challenge. All right, we've got so many more things [00:54:47]Alessio: we want to dive into, but I don't want to keep you here for hours. [00:54:50]Swyx: This is not the Lex Fridman podcast, [00:54:52]Alessio: as we always like to say. One topic I would love to chat a bit about is Mojo, Modular, you know, Chris Lattner, who we had on the podcast. So we want to spend a little time there. You recently did a hacker's guide to language models, and you ran through everything from quantized models to, like, smaller models, larger models, and all of that. But obviously, Modular is taking its own approach. Yeah, what got you excited? I know you and Chris have been talking about this for, like, years, and a lot of the ideas you had, so. [00:55:23]Jeremy: Yeah, yeah, yeah, yeah, no, absolutely. So I met Chris, I think it was at the first TensorFlow Dev Summit. And I'm not sure if he'd even officially started his employment with Google at that point. So, you know, certainly nothing had been mentioned. So I, you know, I admired him from afar, with LLVM and Swift and whatever.
And so I saw him walk into the courtyard at Google. It's just like, oh s**t, man, that's Chris Lattner. I wonder if he would lower his standards enough to talk to me. Well, worth a try. So I summoned up my courage, because, like, nobody was talking to him and he looked a bit lost, and I wandered over and it's like, oh, you're Chris Lattner, right? It's like, what are you doing here? What are you doing here? And I was like, yeah, yeah, yeah. It's like, oh, I'm Jeremy Howard. It's like, oh, do you do some of this AI stuff? And I was like, yeah, yeah, I like this AI stuff. Are you doing AI stuff? It's like, well, I'm thinking about starting to do some AI stuff. Yeah, I think it's going to be cool. And it's like, wow. So, like, I spent the next half hour just basically brain-dumping all the ways in which AI was stupid to him, and he listened patiently. And I thought he probably wouldn't even remember it, or care, or whatever. But yeah, then I kind of re-caught up with him a few months later, and it's like, I've been thinking about everything you said in that conversation. And he, like, narrated back his response to every part of it, the projects he was planning to do. And it's just like, oh, this dude follows up. Holy s**t. And I was like, wow, okay. And he was like, yeah, so we're going to create this new thing called Swift for TensorFlow. And it's going to be, like, a compiler with auto-differentiation built in, and blah, blah, blah. And I was like, why would that help?
[00:57:10] Swyx: You know, why would you?
[00:57:10] Jeremy: And he was like, okay, with a compiler, during the forward pass you don't have to worry about saving context, you know, because a lot will be optimized in the backward pass. And I was like, oh my God. Because I didn't really know much about compilers. You know, I knew enough to kind of understand the ideas, but it hadn't occurred to me that a compiler basically solves a lot of the problems we have as end users.
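To make that compiler point concrete, here is a minimal toy sketch of compile-time autodiff, in Python rather than Swift. This is purely illustrative and is not Swift for TensorFlow's actual design; the tiny expression language and all names here are made up. The idea it shows: the gradient program is derived once, ahead of time, so the forward pass never has to record intermediates on a runtime tape.

```python
# Toy symbolic autodiff: expressions are nested tuples.
# ('x',) is the variable, ('const', c) a constant,
# ('add', a, b) and ('mul', a, b) are the two operations.

def compile_grad(expr):
    """Derive the derivative expression once, at 'compile time'."""
    op = expr[0]
    if op == 'x':
        return ('const', 1.0)          # dx/dx = 1
    if op == 'const':
        return ('const', 0.0)          # constants vanish
    if op == 'add':                    # sum rule
        return ('add', compile_grad(expr[1]), compile_grad(expr[2]))
    if op == 'mul':                    # product rule
        a, b = expr[1], expr[2]
        return ('add', ('mul', compile_grad(a), b),
                       ('mul', a, compile_grad(b)))
    raise ValueError(op)

def evaluate(expr, x):
    """Run an expression; note no tape or saved context is needed."""
    op = expr[0]
    if op == 'x':
        return x
    if op == 'const':
        return expr[1]
    if op == 'add':
        return evaluate(expr[1], x) + evaluate(expr[2], x)
    if op == 'mul':
        return evaluate(expr[1], x) * evaluate(expr[2], x)
    raise ValueError(op)

# f(x) = 3*x*x, so f'(x) = 6*x, derived once up front.
f = ('mul', ('const', 3.0), ('mul', ('x',), ('x',)))
df = compile_grad(f)
print(evaluate(f, 2.0), evaluate(df, 2.0))  # 12.0 12.0
```

An eager framework rebuilds a tape of intermediates on every forward run; deriving `df` once ahead of time, and then optimizing it, is roughly what "a lot will be optimized in the backward" is gesturing at.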
I was like, wow, that's amazing. Okay, you do know, right, that nobody's going to use this unless it's, like, usable? It's like, yeah, I know, right. So I was thinking you should create, like, a fast.ai for this. And I was like, okay, but I don't even know Swift. And he was like, well, why don't you start learning it? And if you have any questions, ask me. It's just like, holy s**t. Like, not only has Chris Lattner lowered his standards enough to talk to me, but he's offering me personal tutoring on the programming language that he made. So I was just like, I'm not g
Dr. Rosenblum explores peptides: the various types, uses, and applications for health and wellness.
Upcoming Pain Management Conferences, Workshops and Events:
NYC Regional Anesthesia and Pain Ultrasound CME Workshop - Saturday, October 28, 2023, 8:00 AM
Charleston, SC Regional Anesthesia and Pain Ultrasound CME Workshop - Sunday, October 29, 2023, 9:00 AM
NRAP Academy: Regenerative Pain Medicine Course, NYC - Saturday, November 11, 2023, 8:00 AM
NYC Regional Anesthesia and Pain Ultrasound CME Workshop - Saturday, December 16, 2023, 7:30 AM
NYC Regional Anesthesia and Pain Ultrasound CME Workshop - Saturday, January 6, 2024, 7:30 AM
For an up-to-date calendar, click here!
References:
https://healthgains.com/wellness/peptide-therapy/#Selank
https://www.nature.com/articles/s41392-022-00904-4
Claim CME Credit: The CE experience for this podcast is powered by CMEfy. Click here to reflect and earn credits: https://earnc.me/ATmqM6
David Rosenblum, MD, Garden City and Brooklyn pain physician, world-renowned for his work on the PainExam Podcast, Board Review, and NRAP Academy's continuing medical education programs, discusses ketamine infusions, optimal infusion protocols, and the evidence (or lack thereof) to support them. Ketamine infusions have been used for chronic neuropathic pain, CRPS, and depression. Dr. Rosenblum is accepting new patients; consultations can be scheduled by visiting www.AABPPain.com or calling 718 436 7246 or 516 482 7246.
Pain Management Board Prep
Physiatry Board Prep
Ultrasound Guided Regional Anesthesia and Pain Medicine, NYC - July 19, 2023
Ultrasound Guided Regional Anesthesia and Pain Medicine, NYC - August 19, 2023
Ultrasound Guided Regional Anesthesia and Pain Medicine - Sept 15, 2023, San Juan, PR
For an up-to-date calendar, click here!
Reference:
Maher, Dermot P., MD, MS; Chen, Lucy, MD; Mao, Jianren, MD, PhD. Intravenous Ketamine Infusions for Neuropathic Pain Management: A Promising Therapy in Need of Optimization. Anesthesia & Analgesia 124(2): 661-674, February 2017. DOI: 10.1213/ANE.0000000000001787
Help me to continue to make and share great Biblical content every day: https://thebibleproject.buzzsprout.com
Beware of Pharisees.
Then Jesus spoke to the multitudes and to His disciples, saying: "The scribes and the Pharisees sit in Moses' seat. Therefore, whatever they tell you to observe, that observe and do, but do not do according to their works; for they say, and do not do. For they bind heavy burdens, hard to bear, and lay them on men's shoulders; but they themselves will not move them with one of their fingers. But all their works they do to be seen by men. They make their phylacteries (small leather boxes) wide and enlarge the borders of their garments. They love the best places at feasts, the best seats in the synagogues, greetings in the marketplaces, and to be called by men, 'Rabbi, Rabbi.' But you, do not be called 'Rabbi'; for One is your Teacher, the Christ, and you are all brethren. Do not call anyone on earth your father; for One is your Father, He who is in heaven. And do not be called teachers; for One is your Teacher, the Christ. But he who is greatest among you shall be your servant. And whoever exalts himself will be humbled, and he who humbles himself will be exalted." (Matthew 23:1-12)
Help us continue making great content for listeners everywhere: https://thebibleproject.buzzsprout.com
My Amazon Author Page: amazon.com/author/jeremymccandless
Jeremy McCandless is creating podcasts and devotional resources | Patreon
The LIFE Podcast - The Bible Project | Facebook
linkedin.com/in/jeremy-mccandless-68353b16
SPECIAL GUEST: FAITH AMEER Self-rejection can stem from so many different things! Though painful to process, it is important that we take the time to explore what is causing us to reject ourselves. Maybe someone rejected us, and we subconsciously have that rejection leading all of our decisions. Today, I hope everyone who listens to this podcast episode begins to tear down walls of the painful past so healing can come through. Ask yourself questions such as "Why am I chasing something that wasn't benefiting me in the first place?", "Why do I feel like I have to accept this?" or "What is lacking in my life that is making me feel like I need this when it is clearly a toxic situation?". These questions are a great start to self-reflection. "The path that I took I see, it was a part of my destiny" Song Featured: Destiny/I Am a Rock by PHYLISHA, streaming on all platforms, including the following: Apple Users: http://itunes.apple.com/album/id/1626612051 Spotify Users: https://open.spotify.com/album/7D2E08f9QQhnl53cPed9Ld?si=AQWC4n1fSMGFbvNrFefrxQ Tidal: https://tidal.com/clients/web-device-redirect?path=%2Ftrack%2F231265819 _____________________________________ [PHY's POD]cast is a show that is based on Christian principles and is geared towards encouraging people to have a stronger sense of self-worth. This podcast can also educate listeners by bringing awareness to an array of topics. [PHY's POD]cast supports overall mental health positivity, personal development, and spiritual growth.
In this short episode, Phy discusses how to customize Bible study to your unique learning style and find ways to make the Word a priority for your personality. More on this topic can be found in Ep. 93: How to Succeed at a Bible Reading Plan. URLs: Learning Style Quiz: http://www.educationplanner.org/students/self-assessments/learning-styles-quiz.shtml Bible in a Year Highlighters: https://phyliciamasonheimer.com/product/5-bright-gel-bible-highlighters/
It's Black History Month and PHY's PODcast is back with Season 2! We are a people who achieve everything that we stay focused on and put our creative minds to. Before we were even born, God already had a plan for every person's life. He has also made room for us in all areas in which He has called us to be. Fearlessly walk the path that He's laid before you. "The path that I took I see, it was a part of my destiny" Song Featured: Destiny/I Am a Rock by PHYLISHA, streaming on all platforms, including the following: Apple Users: http://itunes.apple.com/album/id/1626612051 Spotify Users: https://open.spotify.com/album/7D2E08f9QQhnl53cPed9Ld?si=AQWC4n1fSMGFbvNrFefrxQ Tidal: https://tidal.com/clients/web-device-redirect?path=%2Ftrack%2F231265819
In this episode we look at Scripture's definition of repentance and Berkhof's threefold approach of intellect, emotions, and will in the repentance process. Phy also answers five questions about repentance: how to know your repentance is genuine; what to do if you repent, then sin again in the same way; whether we need to repent of ignorant sins; and more. Mentioned in this episode: Milton Vincent's A Gospel Primer and Berkhof's Systematic Theology.
Who doesn't love a good book recommendation?? In this episode Phy shares the highlights of her book list from 2022, including theology, nonfiction and fiction reads.
Theological books: If You Will Ask; St. Thomas Aquinas; Warfare Prayer; The Lord's Work in the Lord's Way / No Little People; Faith in the Wilderness.
Other non-fiction: Teaching from Rest; The Temperament God Gave Your Kids; WayMaker; The Brave Learner; A Needle in the Hand of God.
Fiction: Pride and Prejudice; The Secret of the Old Clock; Firekeeper's Daughter; the Chief Inspector Gamache series; An Old-Fashioned Girl; The Great Alone; The Four Winds; The Lion, the Witch and the Wardrobe.
How do we know if what we're singing to God is truthful and good? In this episode, Phy explains the regulative v. normative principle of worship and breaks down four popular hymns/worship songs, comparing them to Scripture.
Every fall the posts and hot takes are flying: should Christians celebrate Halloween? In this episode, Phy breaks down the history of Samhain and All Hallows' Eve, then discusses how to discern the celebration of this holiday and the options of imitation, abstention, or redemption. She also discusses magic in books and movies.
Who doesn't love a book recommendation? A new monthly episode of book recs is coming to Verity, and we're starting with children's faith-based books and Bibles! Phy shares her top seven books for discipling her own kids, ages 2, 5, and 7. --Books-- The Jesus Storybook Bible; The Spirit of God Illustrated Bible; The Biggest Story; Seek and Find: Old Testament Bible Stories; Bible Infographics for Kids; Read It, See It, Say It, Sing It; Any Time, Any Place, Any Prayer
As promised last week, this episode wraps up our two-part conversation on discipleship. (Go back and listen to Part 1 if you missed it!) In this convo, Somer and Michelle dive more into the how of discipleship, sticking with a Biblical foundation but also giving you as much freedom in your obedience as possible. Here are a few of the resources we mentioned: Follow @phyliciamasonheimer on Instagram - we love ya, Phy! Following Jesus in Threes by Soo-Inn Tan BIG changes coming to the podcast beginning next week with new ways for you to join in the conversation - can't wait to tell you more! If you aren't already getting our emails and want to know about the changes first, add yourself to our mailing list here.
Sexual addiction is more common than many of us realize. And in the wake of purity culture, many Christians don't know how to think well about sex and sexual sin. In this episode, Phy breaks down why sexuality deserves honor and the hope we have when we've fallen short.
Ben Coffin is a Solutions Marketing Manager for PHY and ORAN emulators at Keysight Technologies. Having spent the last decade in the test and measurement industry, Ben has held business development, product management, and systems engineering roles across the wireless communications space. He is also enthusiastic about telling stories of how technology in the wireless world is advancing and how the bleeding edge finds root through research and through industry and academic collaboration. In a nutshell, he's a perfect fit for the Tech Talks Daily podcast. Is it too early to be talking about 6G? We discuss 5G, 6G research, Open RAN, academic research on communications, and policy initiatives. We also explore how wireless communications will evolve how we communicate, and the types of partnerships (private/public, government) that need to exist to make research in the wireless space successful.
Dr. Ellen Beckjord, the Vice President of Population Health and Clinical Optimization at UPMC Health Plan and host of the Health Plan's podcast, Good Health, Better World, speaks with Chris Meek about a myriad of issues within the mental health space on this installment of Next Steps Forward. Dr. Beckjord addresses the common notion that mental health issues can be addressed quickly, a belief often promoted by the relative conveniences that many individuals experience in their daily lives. In addition to a strong focus on access to mental health in terms of policy and parity, she will highlight emerging bright spots in mental health treatment, how mental health challenges affect different age groups, the social determinants of healthcare and behavioral science and the often overlooked silver linings from the COVID-19 pandemic. A champion for a culture that recognizes that good mental health is just as important as good physical health, Dr. Beckjord will leave the audience with a renewed vigor to focus on and address our population's overall mental health.
Talkin' Fanfic Episode No. - 302 Episode Title - Interview with phyn, aka RedHead Summary: Yet another incredible treat for the ColdFlash fans! On this episode of Talkin' Fanfic, Sara sits down with phynali, better known in ColdFlash circles as (the one, the only!) RedHead. RedHead has authored two of the top five kudos'd works under the ColdFlash tag on Ao3, and their work is commonly counted as some of the most seminal in the fandom. Sara and phyn dig into three of RedHead's works: “Tumbling Together”, “An All Too Jagged Snowflake”, and “Got A Melancholic Temperament (that's what they said to me)”. Sara and phyn talk about the joys and collaborative nature of the ColdFlash community, (“Hi, Aunt Crimson!”), the challenge of balancing a PhD thesis with fanfiction obligations, how juggling multiple (and very different) stories can actually help overcome writer's block, and also touch on phyn's work under the pseud “dyed_red”, where they publish Supernatural works of a very different (darker, stylistic, minimalistic) nature and atmosphere, and the challenge therein of dipping a toe into a new fandom. Bulletin Items “LEGENDS: A 2022 Clex Zine launched”. Available at https://legendszine.com/ digitally and as a PDF “Talkville” a Smallville recap podcast, coming July 13th. 
At least one episode will feature Zach from "Always Hold Onto Smallville". Episode References: "Tumbling Together" by RedHead (Barry/Len, neighbors AU); "An All Too Jagged Snowflake" by RedHead (Barry/Len, soulmates AU); "Got A Melancholic Temperament (that's what they said to me)" by RedHead (Barry/Len, deaged!Len); Collab: "Len & Barry Ship It" collaborative group collection (ColdFlash); An old fave: "Getting the Hang of Thursdays" by Hayseed (Snape/Hermione timeloop); A dyed_red fic: "Squint into the Sunset | Glare into the Gloaming" by dyed_red (Wincest); Phyn rec, author: nigeltde (Wincest); Phyn rec: "A man with his insides out and his outsides off" by britomart_is (Wincest, time travel); Phyn rec: "Wire Inside Me" by merle_p (Wincest, mpreg); Phyn rec: elsi's Bookmarks (elsi was interviewed in Talkin' Fanfic: Episode 209); Phyn rec, author: bealeciphers (ColdFlash); Phyn rec: "Rogue Z" by bealeciphers (ColdFlash, zombie AU); Phyn rec, author: Crimson1 (ColdFlash, interviewed in Talkin' Fanfic: Episode 301); Phyn rec, author: DoreyG (ColdFlash, some of the earliest ColdFlash stories); Phyn rec: "Lesser Evils" by tipsybluetips (the first ColdFlash story on Ao3! comics!ColdFlash); Phyn rec: "Things I Cannot Control and Do Not Admire" by n_a_feathers (ColdFlash, part of the "Things" series); Phyn rec: "A Week on Rogue's Mountain" by Chichirinoda (ColdFlash auction AU; phyn "loves this fic").
Ryan, Chris, and James discuss the third round of the playoffs, including their predictions of which team continues on to the Stanley Cup Final and which RFAs and UFAs get re-signed by the Stars this offseason, and in the Who Cares? segment, they each discuss their top 3 hand soaps. If you or someone you know has a gambling problem, crisis counseling and referral services can be accessed by calling 1-800-GAMBLER (1-800-426-2537) (IL/IN/MI/NJ/PA/WV/WY), 1-800-NEXT STEP (AZ), 1-800-522-4700 (CO/NH), 888-789-7777/visit http://ccpg.org/chat (CT), 1-800-BETS OFF (IA), 1-877-770-STOP (7867) (LA), 877-8-HOPENY/text HOPENY (467369) (NY), visit OPGR.org (OR), call/text TN REDLINE 1-800-889-9789 (TN), or 1-888-532-3500 (VA). 21+ (18+ WY). Physically present in AZ/CO/CT/IL/IN/IA/LA/MI/NJ/NY/PA/TN/VA/WV/WY only. Min. $5 deposit required. Eligibility restrictions apply. See http://draftkings.com/sportsbook for details.
We kick off the Summer edition of the Grit and Bear It Podcast: I return from my break as a married man, Bears signings, the Stockton Heat moving, and my thoughts on the Calder Cup Playoffs so far. If you or someone you know has a gambling problem, crisis counseling and referral services can be accessed by calling 1-800-GAMBLER (1-800-426-2537) (IL/IN/MI/NJ/PA/WV/WY), 1-800-NEXT STEP (AZ), 1-800-522-4700 (CO/NH), 888-789-7777/visit http://ccpg.org/chat (CT), 1-800-BETS OFF (IA), 1-877-770-STOP (7867) (LA), 877-8-HOPENY/text HOPENY (467369) (NY), visit OPGR.org (OR), call/text TN REDLINE 1-800-889-9789 (TN), or 1-888-532-3500 (VA). 21+ (18+ WY). Physically present in AZ/CO/CT/IL/IN/IA/LA/MI/NJ/NY/PA/TN/VA/WV/WY only. Min. $5 deposit required. Eligibility restrictions apply. See http://draftkings.com/sportsbook for details.
Topics: the series with Colorado, the Jacob Ingham injury, season awards, and who is going to take the next step. Not to mention a serious chicken wing discussion. #GoKingsGo #OntarioReign #LAKings #Reign #Kings #GKG #Ingham #JacobIngham #LosAngeles #DraftKings #THPN If you or someone you know has a gambling problem, crisis counseling and referral services can be accessed by calling 1-800-GAMBLER (1-800-426-2537) (IL/IN/MI/NJ/PA/WV/WY), 1-800-NEXT STEP (AZ), 1-800-522-4700 (CO/NH), 888-789-7777/visit http://ccpg.org/chat (CT), 1-800-BETS OFF (IA), 1-877-770-STOP (7867) (LA), 877-8-HOPENY/text HOPENY (467369) (NY), visit OPGR.org (OR), call/text TN REDLINE 1-800-889-9789 (TN), or 1-888-532-3500 (VA). 21+ (18+ WY). Physically present in AZ/CO/CT/IL/IN/IA/LA/MI/NJ/NY/PA/TN/VA/WV/WY only. Min. $5 deposit required. Eligibility restrictions apply. See http://draftkings.com/sportsbook for details.
In this week's Hockey Royalty Podcast, the guys are back to break down Game 5 between the Kings and Oilers, Darnell Nurse's suspension, and to look ahead to Thursday's critical Game 6 matchup. #GoKingsGo #LAKings #NHL #Hockey #PlayoffHockey #LosAngeles #Kings #LosAngelesKings If you or someone you know has a gambling problem, crisis counseling and referral services can be accessed by calling 1-800-GAMBLER (1-800-426-2537) (IL/IN/MI/NJ/PA/WV/WY), 1-800-NEXT STEP (AZ), 1-800-522-4700 (CO/NH), 888-789-7777/visit http://ccpg.org/chat (CT), 1-800-BETS OFF (IA), 1-877-770-STOP (7867) (LA), 877-8-HOPENY/text HOPENY (467369) (NY), visit OPGR.org (OR), call/text TN REDLINE 1-800-889-9789 (TN), or 1-888-532-3500 (VA). 21+ (18+ WY). Physically present in AZ/CO/CT/IL/IN/IA/LA/MI/NJ/NY/PA/TN/VA/WV/WY only. Min. $5 deposit required. Eligibility restrictions apply. See http://draftkings.com/sportsbook for details.
Randon sits down with the new goalie for the Reign, David Hrenak. They talk about coming over to the US, creating a home in St. Cloud, how he is preparing for the playoff run, and how his idol growing up was a past LA King. Like and subscribe! #LAKings #Kings #StCloudState #goalie #Reign #OntarioReign #Slovakia #DavidHrenak #Hrenak #THPN #DraftKings If you or someone you know has a gambling problem, crisis counseling and referral services can be accessed by calling 1-800-GAMBLER (1-800-426-2537) (IL/IN/MI/NJ/PA/WV/WY), 1-800-NEXT STEP (AZ), 1-800-522-4700 (CO/NH), 888-789-7777/visit http://ccpg.org/chat (CT), 1-800-BETS OFF (IA), 1-877-770-STOP (7867) (LA), 877-8-HOPENY/text HOPENY (467369) (NY), visit OPGR.org (OR), call/text TN REDLINE 1-800-889-9789 (TN), or 1-888-532-3500 (VA). 21+ (18+ WY). Physically present in AZ/CO/CT/IL/IN/IA/LA/MI/NJ/NY/PA/TN/VA/WV/WY only. Min. $5 deposit required. Eligibility restrictions apply. See http://draftkings.com/sportsbook for details.
As we roll up on 100 episodes, I want to revisit one particular area that constantly rears its ugly head: ongoing change and the need to respond to stress. For context, the world is crazy, definitely stressful, outrageously distracting, and it's pulling you away from the focus you need on getting to where you need to be. Russia/Ukraine, pandemic, great resignation: the list goes on. I feel it, you feel it; it's like a broken record; there is always "something." That said, we also know that the only way to deal with macro change is to focus on micro you. (Provided there aren't tanks outside your house... I get that.) Two years ago I did a podcast that organized my thoughts on wellness, and I did it because I was frustrated with the product and messaging that was thrown around in the corporate world: judgmental theory on what you were "supposed" to do, without context, the why, or anything tangible on how to get started. I revisit it today because in two years I, myself, have changed, and while the framework remains the same, my experiences in chatting with other people about it, and how I respond, have shifted enormously. None of us are the same person we were two years ago. Lots to discuss, from physical to mental to social, but it's an ongoing dialogue that needs to be had, for you. Long story short: wellness is bespoke to you, and requires full accountability and responsibility on your part. Nobody can make you well but yourself. Enjoy the show. I'm here to chat about it, and look forward to seeing you soon! Click Here for an Unedited Transcript of the Podcast Speaker 1 (00:00): 3, 2, 1. Welcome to Bellwether.
Thank you for joining this week, episode 98. We're almost at triple digits, number 98, and we're gonna keep going and do many more. But as we roll up close to the hundredth episode, we'll try to do something special; we'll see what comes out. I wanted to revisit something in particular, because I think it's relevant today, and you hear me talk about it all the time. I guess this is almost like my shtick now: the never-ending change and the ongoing stress that comes with change. And, you know, we're in the middle of a very dynamic time, which might be an understatement, with lots of, you know, pandemic challenges, Russia/Ukraine, the workplace changing, the great resignation and everything. And the context for why I wanted to revisit wellness, which is what we're talking about today, is that the world is crazy, immensely stressful, to the point where it's ridiculously distracting.
Speaker 1 (01:09): And it's all pulling you away from what's important today and where you want to be. And this is relevant; this is obviously relevant to myself. You know, I'm going through it, I'm seeing it. The people I'm interacting with aren't as responsive as they usually are. And I would say it's because we're tuned in to all of this crazy stuff going on in the world. And how is that relevant to what we need to do today? The spiral of: this is distracting me, I can't get this done, tomorrow that wasn't done, and now the other thing's not getting done. And it snowballs into bigger things, all types of worry. We've got families to take care of. And all of this is a challenge to, in particular, our wellness. And if you look back, I've talked about wellness.
Speaker 1 (01:54): I've talked about wellness a lot. If you listen to the episode from two years ago, it's a big topic.
And the reason I wanted to tackle wellness, and I called it "wellness organized," I did this two years ago, was that I was frustrated with, I guess I'll call it, the wellness product and the messaging that was out there. I didn't really appreciate the way that it was framed, especially in the corporate world. I'd come out of the corporate world and, you know, was doing the coaching thing, and everyone was telling you to just do yoga and meditation. And it was very, very judgmental. I thought the wellness product was very judgmental and very disingenuous. It was telling you all the things that you were supposed to be, and it was never telling you how, it was never telling you how to do this stuff.
Speaker 1 (02:39): And that was the big fall-down. And so I needed to come up with some kind of framework. I needed to organize what I actually thought about wellness. What did I actually need in the moment? Where did I need to focus? And what did wellness actually mean for me? So wellness is a massive topic, right? You've got mental wellness, you've got physical wellness, we've got all these things. My thought process on this led me down a rabbit hole: I wrote the book, I did the podcast, lots of coaching engagements, and everything came out of it. And so I wanna revisit it, because it's not a one-time solution. Wellness is not a one-time solution. Wellness is an ongoing challenge, an ongoing struggle, an ongoing focus, whatever, but it changes because we change. And so, I mean, I didn't even re-listen to what I talked about before.
Speaker 1 (03:27): I kind of know what I talked about in terms of what wellness means and how to structure it. And it's my three categories: you've got your physical and your mental and your social. Spiritual falls under mental; I'll talk about that in a minute.
But now it's time to really make it real for people, because, being a hundred episodes in, we're talking about things like, as I reflect (and I'll reflect more on the hundredth episode), imposter syndrome, things like communicating to the C-suite, things like interpersonal dynamics and relationships and everything else. I'm gonna kick off this next century of episodes with experts that are gonna tell you how. I've got a nutritionist coming on the show. I've got this team that has made this incredible app, which I don't wanna give away the title of yet, because they're gonna be coming on.
Speaker 1 (04:14): And I want them to talk about it, but it's really on a mental health and accountability aspect, which doesn't even really do it justice. I'm really jazzed about it. I've got Too Sweet, the professional boxer, coming back to talk about resilience: he's coming back from an injury, and how he's going to rebuild, and all of that good stuff. So wellness, what it is, is it really evolves, okay? What wellness is, and what you need in the moment, continually evolves. My wellness today is very different than what it was two years ago. Right now, I mean, I'm in a very different place; what I need is very different. Right now, I would argue my biggest focus is on social wellness. Two years in the pandemic, I haven't really been interacting.
Speaker 1 (05:01): These levers go up, and the past two years have been purely focused on mental, my mental wellness. What's my, you know, self-love aspect? What's my belief system? What do I really believe? What does self-care actually mean? And what I do with my clients, and what I wanna do on this podcast, hopefully, is I hope I get people to think. And, you know, I keep calling it wellness. I wish I could come up with another word. I don't know what it is.
You know, when we hear "wellness," we think of yoga, drinking shakes, eating healthy bowls and kale salads, meditating, and yada yada yada. And that may be right, but that's not where the focus should lie. I would argue some of these are components of how people respond to their own challenge of finding themselves to be well, which is great.

Speaker 1 (05:47):
Good for them; those are the needs that they have. Yes, physical wellness is important, but it's one of the three, and yoga may not be your solution for it. Tofu, in terms of diet, is never a solution I could ever recommend. We have to go beyond what other people are doing in order to find wellness for ourselves. That's exactly why we have fad diets and all of these things. We have to figure out what it is for ourselves. I know people who eat once a day. I know other people who eat small meals throughout the day. I know other people who do intermittent fasting. I know other people who will say, absolutely not, intermittent fasting doesn't work, that's crazy. And so, as a refresh, I want to talk today about wellness.

Speaker 1 (06:36):
And I want to talk about what you're thinking about and how to think about wellness, whether you want to call it meditation or whatever you want. I want you to think through a dialogue on what you need in the moment and how to do it. The three categories, and I will stand by this, spiritual is not one of them: it's physical, mental, and social. Those are your three categories. It's almost like the cup game with the little marble underneath, where you're moving things around. Those are the three, right? It's a trick game where this lever goes up, this one comes down: where's the shiny ball for today? Those are the three.
And I specifically leave out spiritual because spirituality is a mental exercise.

Speaker 1 (07:22):
So it falls under mental. Okay. I think we've often ignored the social aspect to our detriment, and it actually takes work; at least for me, it takes work. So let's talk first about physical, which we always gloss over. We all know the answers, so we gloss over it. We're like, yep, we've got it, we know what we're supposed to do: diet, fitness, sleep. I'm going to eat right, I'm going to work out, and I'm going to get some sleep. And yet we never do it. But what we need to know about it, and perhaps it's more of a reasoning as to why we do it and why it's so important, is that your physical activity, your physical wellness, what you eat, impacts your ability to think and impacts your social relationships. Eating too much?

Speaker 1 (08:06):
Your body's metabolizing constantly, so you don't have the energy to think properly at work, which is why you should never have a big sandwich at lunchtime in the office; you're just dead in the afternoon. Your gut is a living organism with bacteria in there that actually contribute to cognitive ability. So the food you are eating is affecting your ability to think, which affects the way that you think about yourself, which affects the way that you interact with other people, which affects the yada yada yada. Fitness and exercise drive oxygen to the brain. As for your ability to interact with other people, some research shows that the way we feel about our weight, and where our weight goes, impacts the people that we interact with. So your social communities are impacting how much you eat: the people we surround ourselves with.

Speaker 1 (08:50):
You know, they've linked obesity to our social networks. If we're surrounded by fat people, we're more likely to be fat ourselves, which is crazy. If you're surrounded by smokers,
you're more likely to smoke yourself. And so the social impact, the physical impact, the mental impact: they all interact in this Venn diagram of crazy thought. So from there, I'm not going to tell you what to eat. I've got a nutritionist coming on; she's going to tell you all about the stuff you're supposed to eat and everything else. But the thing is, we know. We know we're not supposed to eat cupcakes every day for dinner. We know pizza and hamburgers and everything like that should be an extreme rarity, and shouldn't be, you know, a once-a-week thing. Friday-night pizza night is not a good idea; I would argue it's too often, because we're eating garbage the rest of the week.

Speaker 1 (09:33):
We still do Friday-night pizza night. I love pizza, what can I say? But there's an accountability aspect to wellness. We know what we're supposed to eat, what we're supposed to do. We know we're supposed to work out. We know we're supposed to go to bed at a reasonable time in order to get up early in the morning and tackle the day. We just don't do it. We just put up our hands and say, okay, I guess we're not going to. Fine. Okay, good. Then you can't complain when something else doesn't work out, right? And I think this is the crux of what I really want to get to with the wellness thing: we're accountable for our wellness. Okay? Nobody else. There's no fad diet that's going to tell you what's supposed to work for you and everything.

Speaker 1 (10:18):
It's your responsibility, fully. You want to do intermittent fasting? Fine, you can do it. There is a right and a very wrong way to do it. You want to do a paleo diet? Great. There's a right and a very wrong way to do it. You want to do a Mediterranean diet? Great. There's a right and a very wrong way to do it. And there is no shortcut.
And so the problem is, we don't want to take the time to educate ourselves on what the proper thing to eat is, what we need in the moment, and what our body needs, because that's too much work. Who's got time for that? So we take something off the shelf, we drink a shake, and that's supposed to be it. And it doesn't work. And then we complain that we're so tired. We complain that we're not getting sleep.

Speaker 1 (10:54):
And we complain that we can't get motivated to go to the gym, and yada yada yada. It's our own fault. We can't blame anybody else. It's you; it's your problem; it's your fault. So let's talk mental, moving on from the physical aspect. Okay, we've got your physical: we're going to eat right, we're going to sleep well, we're going to work out every day. We then go to mental, and the three categories of mental, similarly, are self-love, self-care, and belief system. It sounds really soft, but ignore it at your peril. And this is the one that I've been dealing with a lot. You know, I talked about my dark place in the middle of the pandemic; there was an episode on that. I talked about my challenges with drinking, and struggling, and all of that.

Speaker 1 (11:48):
We could talk about mental toughness in the context of today. Resilience is a big word, and mental toughness, and all of that: everything comes from understanding yourself. Okay? "Oh, I just gotta be tough. I gotta rally. I gotta do all of this stuff." With mental wellness, everyone's got judgment on what you're supposed to do, right? "Oh, just love yourself and everything will be fine." Okay, that doesn't make sense, right? What does that mean? It doesn't mean anything. Whatever you do has to be bespoke to you.
And so we start with self-love, because this is probably fundamental to mental wellness: you probably have to start with loving yourself before you can take care of yourself, before you can really understand what you really believe. The quote that I always share is: how can anybody be satisfied in life

Speaker 1 (12:34):
if they are not satisfied with the one person they can never be separated from? And it's this philosophical thing, but we have to embrace our imperfections. Nobody's perfect. Nobody has it all together, no matter how much they tell you they do. They could be sitting there saying, "Oh, I live on this island and everything's great, and I have an Instagram channel and I'm an influencer," and yada yada. It's a facade, right? Behind the scenes they're miserable human beings trying to present a lifestyle that they don't actually have. And we live in this Instagram reality; we're impacted by everything else. And we have to recognize that no matter what we're dealing with, everybody else is dealing with something similar. Similar, but different, if that makes sense. And we get knocked off, we get rattled all the time. Everybody gets rattled. We get knocked off our horse, and sometimes we're back up in a day.

Speaker 1 (13:26):
Sometimes it takes a month or two. But what gets us back is figuring out our value, for lack of a better term, right? We have to start this dialogue. It comes from an inner dialogue. It's this ongoing process of: what value am I bringing, and how do I actually realize and recognize that value, and believe that value? I've got these people from the forgiving app (I said it!) coming on the show to talk about how forgiveness is a mental exercise. We have to believe it ourselves in order to get back and recognize our value and all of these things, and find what we can control, and do all of this work.
We're going to talk more to them about it, but that's ultimately what self-love is: embracing imperfection. And it's not just physical, right? Sometimes we have judgmental stuff. Embrace it.

Speaker 1 (14:19):
Okay, do you have judgments? Fine. If you don't want them, then address them and change them, and embrace the fact that maybe you're doing something you don't want to do. That's bravery. That's where confidence comes from: admitting that you have flaws. That is fundamental to becoming courageous: recognizing it, boxing it, addressing it, and moving forward. And we're always going to have it. Patience is not one of my finest virtues, and I know that; I do a lot of apologizing, and I'm trying to fix it, trying to help it. But you know what? I also love the fact that I'm impatient, because I get stuff done. So there are good and negative things to this. So, self-love: I'll explore more about that with the forgiveness people. But that's something to think about: what imperfections do you have that you can embrace that aren't just physical?

Speaker 1 (15:14):
We all have physical ones too, right? We all want to look better without a t-shirt at the beach. But you know what? Who cares? So from that, we go to self-care: what we need in the moment. Sometimes it's a nap. Sometimes it's less social media. Sometimes we just need to cut out of work and read a book; I'm guilty of that. But part of this dialogue, once we figure out self-love and understand who we are, is figuring out what we need in the moment. It's listening, it's questioning, it's understanding where our insecurities come from, where our stress comes from. We have to put a name on it. There's an exercise for this: identify the insecurity, give it a name so you can address it. It works for some people and may not work for others. But what's causing your insecurity?
Speaker 1 (15:58):
Imposter syndrome is a significant one. And a lot of this comes from addressing the mental wellness of self-love, self-care, and belief system, and security with yourself. Belief system is the biggest one for me, and it's my favorite one because it's so big, and this is where spiritual falls in: the fundamental question of what do you believe? This will impact your ability to take good care of yourself. This will impact your ability to love yourself. Much of our insecurity and our challenges come from the fact that we don't know what we believe. We're defending something we say when we don't fully understand what it is that we're saying, and we don't take the time to figure that out. And when we don't know what we believe, we just bounce from one idea to the next.

Speaker 1 (16:43):
And we're not grounded. There's no grounding in bouncing from one idea to the next. Previously, when I chatted about belief systems, it was more about the idea that they allow us to have good discussions in a non-defensive manner, right? Things like politics and religion. I think I came out with it in a presidential election year, maybe; I don't know, but obviously it's still relevant here, right? How can you have a belief system if you don't fully understand the other side? You want to say, "I believe in God"? Well, then you have to understand how someone could not believe in God, or vice versa. And that's fine. And all of a sudden, when you understand both sides, you say, okay, I understand why someone would believe that, because I've fully thought it through, but I haven't bought in.

Speaker 1 (17:26):
Right? I don't buy that. Fine, man, that's cool. At a certain point, you know, we've got science that brings us up to a certain point, and then we've got everything else up to another one. It doesn't allow you to ignore facts and reasoning.
And so that's one thing, but today I would move beyond the ability to have deeper discussions, and I would say having a belief system gives you grounding, right? It's a mental health, self-care, and self-love exercise. That's what a belief system is. It grounds us, giving us something to hold onto when the world is going crazy, when we fully understand what we believe in that moment. And understand that a belief system will constantly change. It's dynamic. That's the whole point of learning. We talk about belief and learning and all of this stuff that we're supposed to do.

Speaker 1 (18:15):
That means your belief system is going to change as you get more factual knowledge. As you learn, all of a sudden you say: all right, what can I deduce based on all of the information that I have? And do I really believe that? Then we're not aligning with a group of people and following whoever just tells us what to do. We're actually able to say, you know what, I do believe this is a viable solution to whatever it is that we're doing. The anger and the arguments and the frustrations that we see everywhere are driven, I would argue, by misguided information and insecurity, and both of those are solved when you figure out your belief system and fully understand it. All right: if you're going to have a belief system, you get all the facts, understand the facts, understand what facts are, and then you address whatever shortcomings or insecurities you have with it.

Speaker 1 (19:13):
That's it. I mean, that's mental health 101, and it takes a ton. You probably need help with that, right? You might need a therapist.
You certainly need a friend. Find someone curious like you who can have these types of discussions in this weird, almost Socratic way, where you just kind of have the discussion, not to come to a solution, but to hash things out. You know: why would I believe that versus this? Oh, that's interesting. Okay, let me go back and think about it. Okay, cool. And that's what I do on my runs. I have full Lincoln-Douglas debates in my head while I run, because I'm, you know, insane. That's just what I do. So that's physical and mental.

Speaker 1 (19:59):
Okay, and I'm going long, but this is just important. Okay? Just from the people I'm talking to and what people are struggling with, it's relevant and it's very important. So the third aspect is social, and this is my most important one right now. My mental is ongoing. My physical is ongoing; I've made focused attempts at that and continue to do it. But social: I'm realizing how important social is now. You know, we talk about it, but coming out of the pandemic, I'm starting to see a lot more people and I'm able to interact more. We just did that run up at Martha's Vineyard, and it was a real shock to my system how much fun we had. And so the social aspect has three categories as well.

Speaker 1 (20:46):
You need a support system, micro-interactions, and new people. You have to meet newbies; you need new perspectives. And so when I think about social: the support system is, who are your people? Think about it: if you had to name five people. I've talked about your personal board of directors, and we could do that. Sometimes it's family, sometimes it's friends, and sometimes it's not; maybe it's some other category completely, maybe it's people on a team, or whatever.
This helps mental as well, right? From a motivation perspective, from a venting perspective, to an accomplishment perspective. We talked before about our diets and what we're eating: we're reflective of the people around us, and we're reflective of our support system. And it's fundamental to our overall wellness. From a mental perspective, we're able to vent, we're able to be motivated.

Speaker 1 (21:42):
We're able to talk about our challenges. We're able to articulate our belief system and question our belief system in a safe place. We find these people who have absolutely zero judgment; that's a requirement of a good support system. And that's difficult for some people. And whether you're introverted or not, introversion is irrelevant, right? We talk about introversion all the time: I get my energy from being alone, and everything else. That's fine. I'm introverted; I get it. Sometimes I need to turn the world off. But at the same time, that doesn't mean you can't have social interaction some of the time. And when we figure out who our support system is, who we're socially surrounding ourselves with, and who we're incorporating into that, we may get energy from being alone, but we also get energy from the people around us.

Speaker 1 (22:36):
And that's a lever. It's a percentage thing; it's a balance thing. Some people like more of one versus the other, but you need both. You absolutely need both. You can be the most extroverted person in the world and love a big crowd; sometimes, though, you have to be by yourself. You have to answer the questions in your head, and a lot of extroverted people had real trouble with that during the pandemic. Introverts, you have to get out, right? You can't just live in your head. It's a challenging place to be; it's a lonely place to be. But we need those people who we can implicitly trust.
We need those people who can give us good counsel. And we need to be that person for other people as well, right? It goes two ways, and we get mental benefit and value out of that.

Speaker 1 (23:16):
So you are part of a support system for somebody else as well. It doesn't just go one way, right? We don't just take from other people and then leave them to their own devices. It's a give-and-take kind of relationship, an economic aspect. So, no matter how you frame your social system, you do need a support system in place. From there, the other two are less involved, almost check-the-box items. One is micro-interactions: we need to feel part of a bigger world. It's the librarian, it's the bus driver, it's the people on the train, whatever it is. We need to recognize that there are other people in this world, that we're part of something bigger, and that there is a perspective beyond us. Oftentimes, when things get stressful, we pull our mindset down into this incredibly narrow focus where we're unable to see the forest for the trees, and micro-interactions pull us out to recognize that there is something bigger than us in this world.

Speaker 1 (24:22):
And that's important. It's important for the perspective. It's important for the mentality of "this too shall pass." There's time; we've got everything. It may take us a week, it may take us a month, but there are other people in this world. The world will go on. We are on a pebble going through space, and you know what? Your inability to get a salad for lunch is really kind of irrelevant. Most of our problems are extremely irrelevant. And this perspective from micro-interactions and bigger things allows us to focus on what's important. Okay, so that's that. The other one is: we need new experiences, and we need to meet new people, not just from a micro-interaction perspective, right?
And this is one of the benefits of going back to the office: to see people and do all of that.

Speaker 1 (25:04):
One of my struggles when I started working from home eight years ago was no social interaction, and micro-interactions were a big part of it. I had to go to the library to work. I had to go work out of the coffee shop. I had to force myself to go into the city. I'm doing that again now. We also need new perspectives. How are you meeting new people? Because when you get into this echo chamber of the same people over and over, you can lose perspective. And so we need new ideas. We need different perspectives to challenge that belief system we already have, to challenge and question things with our support system, to just learn fundamental fricking manners: somebody has a different idea than I do, so how do I appropriately respond to that? We need new perspectives to challenge our thinking all the time.

Speaker 1 (25:52):
This is part of learning. I've always said kids love life because kids are constantly learning. Adults are miserable because they're supposed to have all the answers. When you go around telling everybody you have all the answers, that's why everyone's so angry. Flip that around and start asking questions from the other side. Be curious and say, wow, this is amazing. Now, they may be, you know, dumbasses, right? A lot of people are. But I get it, I understand it, and we still have to try. There are so many people in this world; it's so important to get these different perspectives and to get yourself thinking about something different. So, as we look to continue and develop in 2022, as I look to another hundred episodes (and I'll do a real hundredth-episode one to talk about it), this is everything, no matter what's going on in the world.

Speaker 1 (26:37):
Now look, I get it, right? If a war comes to your front step,
you've got some pretty impressive challenges. But I'm talking mostly to people in the United States and people in Ireland; those are my two biggest groups of listeners. And so, from the perspective of what we're fortunate enough to only be dealing with, we have to focus on what we can do in the moment. What are the decisions we can make in the moment to make sure that we are addressing everything that we can actually control? Okay? By focusing on you, by focusing on physical, mental, and social: these are the things that we can actually control. When we talk about what you can control and how to relieve stress and all of that, everything else will fall into place. Tough week at work? Focus on you

Speaker 1 (27:23):
and what you can do. Bad week at home? Focus on you and what you can do. Maybe it's a run. Maybe it's some pushups. Maybe it's changing what you're eating. Maybe it's going to bed early. What I love about all this, and I said this a minute ago: we lie in the bed we make. Okay? Your wellness is up to you. There's an accountability aspect. There's a responsibility aspect. You have to do this work. And when we're so busy focusing attention on somebody else, or some other challenge, or some theoretical thing, it's taking away from what's best for us. And the only way we can benefit the world, the only way we can actually impact the people around us in a positive way, is by focusing on making ourselves the best possible person we can be. We lie in the bed we make, and sometimes someone else soils the bed, but then it's up to you to clean your sheets and remake that bed.

Speaker 1 (28:20):
It's kind of a gross analogy, but it's true, right? It's up to us to make our bed, and that's it. That's what it comes down to for wellness. So thank you for listening. I hope this is helpful. I'm going to get into really tactical stuff over the next few episodes,
because I want this to be real, and I want people to think about this and challenge it. And, you know, there are exercises you can do. If you want free exercises, send me a note and I'll send you some, or give me a call and we can chat it through, whatever it is. But it's a stressful time, and it's not even an election year. Holy cow, it's going to get ugly in about a year. It's going to get really ugly, and I mean that sincerely. So as we look ahead to the future, the time to focus on you is right now. So think about yourself, be well, and challenge yourself in really, really good ways. And as always, I'm here to help. Thank you for listening. I hope to talk to you soon. Bye.
Today on the podcast we had an up-and-coming CS:GO caster, Phy. Phy is a 17-year-old who has cast on a wide variety of CS stages! On the pod we cover his individual journey, what he has learned about the industry, and an unexpected prom-posal to a fellow caster? It was a very fun talk, and I look forward to the next time we get Phy on the pod --- Support this podcast: https://podcasters.spotify.com/pod/show/castlecomms/support
Phylicia Masonheimer is a bestselling author and podcast host teaching Christians how to know what they believe and live it boldly. She holds a degree in religion from Liberty University and lives in northern Michigan with her husband and three kids. She's also one of the most focused gals Val knows! Hear about Phylicia's prayer routine, biggest prayer obstacles, and how she finds focus in prayer. Phy's website | Phy's weekly newsletter, Conlectio (one of Val's favorite newsletters she subscribes to!) | Phy on Instagram. Val's new book on prayer releases October 12! Pre-order 1 copy before release day and get the audio version for free. Pre-order 4 copies and get the free no-fail leader guide to lead a group through the book and launch an actual prayer group! Go to valmariepaper.com/pray to order and get bonuses. Want a journal designed to help you focus in prayer? Try our monthly format, designed to be filled out really intentionally once at the beginning of the month. Then you can pick it up daily to pray over several things on your list! Check out our brand new linen journals in the shop now! https://shop.valmariepaper.com/
[PHY's POD]cast is a show that is based on Christian principles and is geared towards encouraging people to have a stronger sense of self-worth. This podcast can also educate listeners by bringing awareness to an array of topics. [PHY's POD]cast supports overall mental health positivity, personal development, and spiritual growth. ••• The arts have been suffering for over a year now due to the pandemic. It's refreshing to know that we artists are still passionate and embracing our creativity all the more. Tonight's conversation features singer/songwriter, actress, model (and so much more!) Doni Nicole! She shares with us her journey through the performing arts, taking us through her highs and lows, and even offering advice on how to stay encouraged, put a period on negativity, and be the best version of YOURSELF! Check out Doni Nicole's music (released as Tradonia Baker for previous content and Doni Nicole for current content) on iTunes, Spotify, iHeartRadio, and other available streaming platforms. Also, follow her on social media: Instagram @DoniSoulSinger24 and Facebook under Doni Nicole. •••
••• Black Women & Alopecia: We are women. We love hair. Even men love hair. But what do you do when you begin going through hair loss way before the typical age? This can be a huge challenge, especially for women, and Black women, to be more specific. Because we take such pride in having healthy, full, and luxurious hair, hair loss can really hit hard for Afro-Caribbeans. In this conversation with Professor Ericka Counts, we discuss her journey as a Black woman navigating alopecia and where she is with it all today. Click/copy and paste the link below to view this magazine issue dedicated to beautiful Black women who have alopecia or just decided to be bald. Fashion Avenue News Magazine, Washington, DC: https://www.magcloud.com/browse/issue/1880823 •••
••• Black men and women have come so far in this life! Barack and Michelle Obama made it look easy with their prestige, poise, professionalism, and confidence, but being Black and being in politics was and still is a challenge. Speaking specifically on this subject, this interview with Natasha, a campaign staffer, gives us a taste of what it is like to be a Black woman working behind the scenes in a world still dominated by white men. •••
••• ADHD isn't talked about in the Black community, and it's not normally a thing that we find ourselves even discussing, since it doesn't "normally happen to us." If Black people sense that their child is hyperactive or just full of energy, our first instinct isn't to go and put them on medicine. But I believe that any race or ethnic background can be hit with any condition. Black people do suffer in a lot of areas that we never talk about. There is so much to learn about ADD/ADHD, and we will dig in and get some firsthand experience with Dr. Aisha Rasberry! •••
••• For the most part, we see how Black men are treated in America in comparison to white men. There is a hierarchy and a sense of entitlement, as well as a definite injustice overall. It isn't talked about, but as women, we are under-appreciated. We work so hard to keep things going but never get credit. During this interview, I had the chance to talk with a few of my friends about their points of view on womanhood. To go even more in-depth, we discuss our experiences, similarities, differences, and challenges as Black and white women in America. •••