Podcasts about Simran

  • 505 PODCASTS
  • 2,397 EPISODES
  • 49m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • Apr 18, 2025 LATEST

POPULARITY (2017–2024)

Latest podcast episodes about Simran

Dreamvisions 7 Radio Network
11:11 Talk Radio with Simran Singh: The Prenatal Shadow

Apr 18, 2025 · 55:09


The Prenatal Shadow: Cherionna Menzam-Sills By integrating the prenatal and perinatal shadow hidden just beyond conscious awareness, we can heal our relationships with ourselves and our loved ones as well as reconnect with our original potential. Discover the potential and resilience of babies before, during, and after birth, as sentient beings capable of healing their own trauma and the world if their voices are heard and implicit storytelling is valued, observed, and listened to. It is only through this relational, meaningful engagement between babies and caregivers and community, beginning during conception and pregnancy, that implicit memories are integrated into individual and collective consciousness rather than becoming unconscious shadow acting out in relationship with ourselves and the world. The Prenatal Shadow: Order Here Cherionna Menzam-Sills, Ph.D., is a somatic pre- and perinatal therapist, Continuum somatic inquiry teacher, and biodynamic craniosacral therapist with a doctorate in prenatal and perinatal psychology. Informed by extensive study with perinatal pioneers William R. Emerson and Ray Castellino, and Continuum founder Emilie Conrad, Menzam-Sills has taught internationally, often with her husband, Biodynamics pioneer Franklyn Sills. The author of The Breath of Life and Spirit into Form, she lives in Devon, UK. BirthingYourLife.org Newsletter Sign Up Here - Stay Connected / SIMRAN's Community 11:11 Talk Radio... Conversations of energy, growth, truth, and wisdom that expand personal growth, empower conscious living, and raise self-awareness. Learn more about Simran here: www.iamsimran.com www.1111mag.com/

MenonFitness Systems
16th April 2025: Living a conscious life

Apr 17, 2025 · 10:09


In today's podcast I talk about: Mookambika Devi darshan. Lunch at Shetty lunch home with Gayathri, Simran, Seema. Packing for Kerala trip. Awesome Track workout.

Dreamvisions 7 Radio Network
11:11 Talk Radio with Simran Singh: The Power of Enough

Apr 11, 2025 · 61:03


The Power of Enough: Elizabeth Husserl The Power of Enough is an invitation to explore a new possibility in your relationship with money and wealth, where money becomes a trusted mentor that pushes you to expand your horizon and wealth becomes something to embody instead of possess. This requires a radical transformation of how we see wealth, how we define money, and the ways we take responsibility for where we've gone wrong. The Power of Enough: Order Here Elizabeth Husserl is a registered investment advisor representative, financial advisor, and cofounder of Peak360 Wealth Management, a boutique wealth planning firm. She holds a BS in economics from Tulane University and an MA in East-West psychology from the California Institute of Integral Studies, where she has also taught as an adjunct professor. Newsletter Sign Up Here - Stay Connected / SIMRAN's Community 11:11 Talk Radio... Conversations of energy, growth, truth, and wisdom that expand personal growth, empower conscious living, and raise self-awareness.  Learn more about Simran here: www.iamsimran.com www.1111mag.com/

The Pooja & Gurdeep Show
180 - Dating Profile Red Flags

Apr 2, 2025 · 26:41


Usually the prankee, Pooja turns the tables on April Fools' Day, plus regular listener Simran texts in looking for dating advice... AND... P&G learn something shocking about Stef.

Dreamvisions 7 Radio Network
11:11 Talk Radio with Simran Singh: 'Wake Up' Call

Mar 20, 2025 · 45:34


The 'Wake Up' Call: Paul Ferrini "We don't come to peace by trying to change others. We come to peace by being peaceful in our own hearts and minds. The goal of peace and the process of peace are one and the same." - Paul Ferrini New Book on Relationship: Order Here Love Without Conditions Book: Order Here  Paul Ferrini is the author of some 50 books on love, healing and forgiveness. His unique blend of spirituality and psychology goes beyond self-help and recovery into the heart of healing. His conferences, retreats, Affinity Group Process, and Virtual Community have helped thousands of people deepen their practice of forgiveness and open their hearts to the divine presence in themselves and others. www.lightforthesoul.com Newsletter Sign Up Here - Stay Connected / SIMRAN's Community  11:11 Talk Radio... Conversations of energy, growth, truth, and wisdom that expand personal growth, empower conscious living, and raise self-awareness.  Learn more about Simran here: www.iamsimran.com www.1111mag.com/

The Balanced Body Podcast
Episode 45: Bioidentical Hormones, PCOS, Gut Health & Lifestyle Optimization with Guest Expert, Dr. Simran Rattan

Mar 17, 2025 · 27:49


If you've ever been told “your lab work is normal” but still feel exhausted, bloated, or like your metabolism is working against you—this episode is for you. Today, I'm joined by Dr. Simran Rattan, a triple-board-certified expert in Family Medicine, Integrative Medicine, and Health & Wellness Coaching. She's the founder of Kartar Health and a Clinical Mentor at the Andrew Weil Center for Integrative Medicine. We dive deep into: ✅ Why normal blood work doesn't always mean optimal health—and how functional lab ranges give a clearer picture ✅ Signs your hormones are imbalanced (that often get missed!) ✅ PCOS beyond fertility—what it reveals about insulin resistance and why so many women don't get properly screened ✅ Why gut health, stress, and digestion impact your hormones more than you think ✅ The importance of fiber & why an all-carnivore diet isn't ideal ✅ How calorie counting is more about awareness of what you're NOT eating (and why deficits can backfire) ✅ How to create a life filled with meaning & joy for true well-being. Dr. Rattan and I also discuss why hormone therapy alone isn't a magic fix—you need to address sleep, stress, diet, digestion & gut health for real results. Plus, we break down how to take proactive steps NOW to prevent chronic disease and optimize your health as you age.

Dreamvisions 7 Radio Network
11:11 Talk Radio with Simran Singh: Gabriel Meyer

Mar 14, 2025 · 48:02


On the Verge: Gabriel Meyer Are you open to letting go of what you have known? To know what others know… to feel, see and hear the medicine that we each carry. The book traces Gabriel's nomadic adventures across more than 20 countries, woven through with his reflections on faith, culture, and humanity. It culminates in a visionary reimagining of Hebron and Jerusalem as sanctuaries for peace and wholeness, where diverse faiths and traditions come together to celebrate and heal. Gabriel delves into themes like sacred activism, collective trauma, interfaith dialogue, and the spiritual dimensions of justice and creativity. It's a deeply personal and universal narrative offering profound insights for readers of all backgrounds. On the Verge of the Verb is like everything that Gabriel creates… an intersection of art and inspired action… a prayerful supplication and communion. Order the Book Here Gabriel's life journey lends profound depth to his work. Born in Argentina into a family of human rights champions, he has built a creative portfolio that includes four music albums, two books of poetry, and now his first prose book, On the Verge of the Verb. gabrielmeyerhalevy.com Newsletter Sign Up Here - Stay Connected / SIMRAN's Community 11:11 Talk Radio... Conversations of energy, growth, truth, and wisdom that expand personal growth, empower conscious living, and raise self-awareness. Learn more about Simran here: www.iamsimran.com www.1111mag.com/

What's The Hook with Diane & Andy
Chatting with SIMRAN BAIDWAN, THE PITT EP and CLEAN SLATE Showrunner

Mar 7, 2025 · 29:52


Diane does a quick preview of upcoming premieres and then chats with SIMRAN BAIDWAN, an EP on Max's hit medical drama THE PITT, and a showrunner on Prime Video's comedy CLEAN SLATE. We talk about the meticulous process of making THE PITT and navigating several tough storylines.

Dreamvisions 7 Radio Network
11:11 Talk Radio with Simran Singh: Becoming Invincible

Mar 7, 2025 · 49:39


Becoming Invincible: Howard Falco "The invincible mind is an incredibly sacred state of being. A place where intuition flourishes and life bends to your will. In this state you have realized your power as a creator and are open to the beauty of life and where it takes you in terms of your peak potential. A new you is now emerging." Are you ready for next-level mindfulness and success? Would you like renewed focus, inspired energy, and breakthrough results? Are you ready to tap into your infinite potential? “The ultimate goal of the invincible mindset is not to think you will manifest what you want. It is to know you will manifest it. This knowledge cannot be faked. It must be a knowing beyond all thinking. It must be a true state of being: “I AM.” — Howard Falco Howard Falco is a modern-day spiritual teacher and mindset and mental strength coach. His new book is INVINCIBLE: The Mindset of Infinite Potential and the Secret to Inevitable Success (releasing March 25th, 2025). HowardFalco.com — TotalMindSports.com — 8Wisdom.org Newsletter Sign Up Here - Stay Connected / SIMRAN's Community 11:11 Talk Radio... Conversations of energy, growth, truth, and wisdom that expand personal growth, empower conscious living, and raise self-awareness. Learn more about Simran here: www.iamsimran.com www.1111mag.com/

For Love and Chocolate
The One With GuruPrem and Simran Khalsa

Mar 5, 2025 · 40:22


Love From Above with GuruPrem and Simran Khalsa 

Dreamvisions 7 Radio Network
11:11 Talk Radio with Simran Singh: Spiritual Aging

Feb 28, 2025 · 44:43


Spiritual Aging: Carol Orsborn, Ph.D. “A human being would certainly not grow to be seventy or eighty years old if this longevity had no meaning for the species.” -Carl Jung "Spiritual aging argues that the primary task of development in the second half of life is to achieve some degree of detachment from all the busyness, frenzy, and overactivity that defines success in earlier life stages to make space for new life to emerge. We regret less and appreciate more. And there are moments when we dissolve completely into a state of joy for no reason, feeling in the very marrow of our bones that all is well in the world." -Carol Orsborn, Ph.D. Dr. Carol Orsborn is the author of 36 books about life stage and spiritual development and the guiding force behind the Spiritual Aging Study and Support Group movement, small in-person and online gatherings of old souls meeting around the world. She is an internationally recognized thought leader on the fulfillment of the human potential through all life stages. carolorsborn.com Newsletter Sign Up Here - Stay Connected / SIMRAN's Community 11:11 Talk Radio... Conversations of energy, growth, truth, and wisdom that expand personal growth, empower conscious living, and raise self-awareness. Learn more about Simran here: www.iamsimran.com www.1111mag.com/

Now or Never
Craving an escape? Come travel the world with us (without leaving your couch)

Feb 27, 2025 · 44:41


Cairo. Nairobi. Rio de Janeiro. Those are just a few of the places we're going today, to see how Canadians are making their mark around the world right now. Simran Bajwa is determined to become the youngest Canadian to hike the seven summits, the highest mountain peaks on each continent. Something that's keeping her motivated through gruelling weather and treacherous terrain is her mom, who is living vicariously through Simran because she spent many years unable to scratch her own travel itch. Despite being born and raised in Ottawa, travel journalist Joel Balsam never felt completely himself in Canada. So he started “shopping” for a new place to live in his adult years, travelling to more than 60 countries. The place he feels most alive is Rio de Janeiro, Brazil, so he decided to stay. But Joel says there's a difference between “staying” and “settling.” When Incia Khalid travelled to Cairo, Egypt at a low point in her life, she didn't expect to find the healing she needed. Today, hear how she handles life, business, and motherhood between two continents. For the last 35 years, Larry Gelmon has lived and worked in Kenya. But his time there - as a doctor and researcher at an HIV/AIDS clinic in Nairobi - is very much up in the air, after the U.S. put a hold on international aid. This winter, Katya Castillo left her hometown of Edmonton to spend a week in Puerto Vallarta and Mexico City. But Katya isn't just another tourist – she grew up in Mexico and this is her home too. She explains what it's like for her to go there as a visitor now, and be a Mexican-Canadian among the “gringos” abroad.

Vada Poche Tamil Podcast
EP 246: Meeting Simran, Hip Hop Tamizha & The Indian Drunkard Stereotype

Feb 23, 2025 · 28:30


This episode is all about unexpected moments, fresh perspectives, and breaking stereotypes. We talk about how we got a surprise chance to interview Simran at Pradhana Vizha—something we never saw coming! We also met Hip Hop Tamizha ahead of his concert in Singapore next week and saw a completely different side of him that changed our perception. Beyond that, we dive into the common stereotype of Indians being drunkards and how banter culture plays into it. It's a mix of fun, deep conversations, and plenty of laughs. Tune in to hear it all! ========== Don't forget to like, comment and subscribe to our YouTube and other social channels to never miss an update. Thank you for your support and we look forward to sharing more exciting content with you soon!

Chef AJ LIVE!
Day 16: From Doctor to Patient: Resilience and Finding Joy with Cancer Previvor Dr. Simran Malhotra

Feb 22, 2025 · 97:56


https://www.coachsimranmd.com/ ORDER MY NEW BOOK SWEET INDULGENCE!!! https://www.amazon.com/Chef-AJs-Sweet-Indulgence-Guilt-Free/dp/1570674248 or https://www.barnesandnoble.com/w/book/1144514092?ean=9781570674242 GET MY FREE INSTANT POT COOKBOOK: https://www.chefaj.com/instant-pot-download MY BEST SELLING WEIGHT LOSS BOOK: https://www.amazon.com/dp/1570674086?tag=onamzchefajsh-20&linkCode=ssc&creativeASIN=1570674086&asc_item-id=amzn1.ideas.1GNPDCAG4A86S Disclaimer: This podcast does not provide medical advice. The content of this podcast is provided for informational or educational purposes only. It is not intended to be a substitute for informed medical advice or care. You should not use this information to diagnose or treat any health issue without consulting your doctor. Always seek medical advice before making any lifestyle changes. Simran Malhotra MD DipABLM CHWC Dr. Simran Malhotra is a triple board-certified physician in internal medicine, hospice & palliative care, and lifestyle medicine as well as a certified health and wellness coach. She was recognized as a "Top Doc" by Baltimore Magazine in Palliative Medicine for three consecutive years (2019, 2020, 2023). Dr. Malhotra is a diplomate of the American College of Lifestyle Medicine (ACLM) and has completed several certifications, including T. Colin Campbell's Plant-Based Nutrition (2019), CHEF Culinary Coaching (2020), and WellCoaches Health and Wellness Coaching (2022). With nearly a decade of experience in both inpatient and outpatient palliative care, Dr. Malhotra leverages the invaluable lessons learned from end-of-life care to advocate for the importance of lifestyle medicine and a positive mindset in enhancing overall well-being. As a mother of two and a BRCA 1 previvor, Dr. Malhotra is deeply committed to her mission. After undergoing a risk-reducing bilateral mastectomy & total hysterectomy at 32 years old due to her strong family history of cancer, she founded Wellness By LifestyleMD, a platform dedicated to educating busy parents about the transformative power of lifestyle & mindset changes on wellbeing, quality of life and longevity. In addition to her entrepreneurial pursuits, Dr. Malhotra writes a column for Everyday Health called "Awaken Your Wellness" and has been featured in various media outlets (TIME, Glamour, Yahoo, MSN, etc.), blogs and podcasts, sharing her unique insights from both her work in palliative care as well as her experiences as a patient and genetic mutation carrier passionate about using lifestyle as medicine. Website: Wellness By Lifestyle MD | By SimranMD https://www.coachsimranmd.com/ This is the viral video she referred to where her husband sings to her: https://www.youtube.com/watch?v=0GojJnrqpeE Shop Dr. Simran's Favorite Lifestyle and Wellness Tools Here: https://www.searchbubble.com/drsimran.malhotra?fbclid=PAZXh0bgNhZW0CMTEAAaaKcnXfpgCEfg6ORr5JmAn03UOQ-64T9bNP1tXmWV9BZ5XD50C3wwfxwpg_aem_UW7k5m26dzvVWFVHGYc4bQ Awaken Your Wellness Columnist at Everyday Health https://www.everydayhealth.com/columns/awaken-your-wellness/ Co-Author of the book “How Healers Heal” https://www.amazon.com/dp/1961549018?linkCode=ssc&tag=onamzchefajsh-20&creativeASIN=1961549018&asc_item-id=amzn1.ideas.1GNPDCAG4A86S&ref_=aip_sf_list_spv_s_ofs_mixed_d_asin Instagram: Simran Malhotra

Dreamvisions 7 Radio Network
11:11 Talk Radio with Simran Singh: Give Love 24/7... 365

Feb 21, 2025 · 43:02


Give Love 24/7... 365: Jacqueline Way What does it mean to give to a world that needs so much? What does it look like to love those who might behave unlovingly? How do we learn to give to others, and also take care of ourselves? 365give founder Jacqueline Way wanted to teach her son Nic how to be a kind, compassionate, and happy human being. That's why, on Nic's 3rd birthday, they decided to give back to the world – every day – for 365 days. Since then, Jacqueline's parenting project has grown into a global movement that has touched the lives of millions and makes a difference 365 days a year. Jacqueline Way / 365 Give Newsletter Sign Up Here - Stay Connected / SIMRAN's Community 11:11 Talk Radio... Conversations of energy, growth, truth, and wisdom that expand personal growth, empower conscious living, and raise self-awareness. Learn more about Simran here: www.iamsimran.com www.1111mag.com/

Dreamvisions 7 Radio Network
11:11 Talk Radio with Simran Singh: Your Role in Today's World

Feb 14, 2025 · 36:15


Your Role in Today's World: SIMRAN Relax into the listening and breath that this episode offers as you realign to your true role in today's world. Each time we allow for the organic cyclical movement that is life's dance, we relax into the river of energy and spirit that knows where we are destined to go. In this way, we open to a Divine plan… Divine will… rather than exerting personal will and then experiencing the challenges, struggle and hardship that such interference produces. These times… they need… deserve… and require our Love, Courage and Commitment… in fact our devotion. Your Journey to Enlightenment: Twelve Guiding Principles of Love, Courage and Commitment Learn more about Simran here: www.iamsimran.com www.1111mag.com/

Dreamvisions 7 Radio Network
11:11 Talk Radio with Simran Singh: Ancestral Trauma

Feb 7, 2025 · 47:32


Ancestral Trauma and Family Constellations: Efu Nyaki We all tend to believe in free will. That it is WE who ultimately determine our destinies, that we are the captains of our own ships. Unbeknown to us, however, we may be influenced by events and circumstances that our ancestors (and their ancestors in turn) experienced during lifetimes long past. And yet, their impact persists outside the realm of our conscious awareness. These lingering “ghosts” exercise powerful influences on our emotions, reactions, behaviors, and choices. Some of these ancestral influences have had negative (even traumatic) effects on us, while others are life-supporting and life-affirming. Guest Efu Nyaki shares her indigenous wisdom in combination with somatic experiencing and family constellations. 11:11 Talk Radio dives deeply into conversations of truth, growth and wisdom to assist individuals in expanding personal growth, conscious living, humanity, and self-awareness. Learn more about Simran here: www.iamsimran.com www.1111mag.com/

Data Brunch from ICPSR
Episode 18: Trust and Outcomes

Feb 5, 2025 · 27:05


The year is 2022. Simran Sethi Khanna just won ICPSR's Undergraduate Paper Competition for her work, "Determining LGB Perceptions of and Trust in the Medical Establishment," and joined Data Brunch to talk about her findings, her inspiration, and her love of Thai food. The year is 2025. Simran is now an MD candidate at the Duke University School of Medicine. She still loves Thai food.

Dreamvisions 7 Radio Network
11:11 Talk Radio with Simran Singh: The Shift: Into the Aquarian Age

Jan 31, 2025 · 45:29


The Shift: Into the Aquarian Age: SIMRAN The transition from the Piscean Age to the Aquarian Age represents a profound shift in humanity's collective consciousness, values, and societal structures. This change is not instantaneous but occurs gradually over centuries, with overlapping influences from both ages. From 0 A.D. to the present, we have been in the Age of Pisces. The Piscean Age was dominated by hierarchy and power. You will see that the Aquarian Age is dominated by networks and information. This show is intended to give you a framework for 'the shift' from the Piscean Age to the Aquarian Age and what that means for you personally, how to negotiate these times... and a balanced, grounded way to begin this new period. *Some references and requoted material in this show are from author Santokh Singh Khalsa. Santokh Singh Khalsa is a chiropractor in Pasadena, California. He is also the director of The Awareness Center, where he has taught Kundalini Yoga and meditation to thousands of people since 1975. He is the master teacher for the Awareness Center Level I Teacher Training program and has trained hundreds of yoga teachers. He practices yoga and meditation without fail every day of his life and eats a healthy vegetarian diet. With his white beard and sparkling eyes, many of his younger practice members call him "Santokh Claus" during the Christmas season. 11:11 Talk Radio dives deeply into conversations of truth, growth and wisdom to assist individuals in expanding personal growth, conscious living, humanity, and self-awareness. SIMRAN is a multiple award-winning author, artist, speaker and mystic of love, compassion and humanity. She creates media, art, books, and online courses that bridge humanity's experience and expression of darkness and light. *** For the sacred work devoted to the Journey of the Soul, order my new self realization oracle trilogy of LIVING, BEING, KNOWING here. Learn more about Simran here: www.iamsimran.com www.1111mag.com/

Dreamvisions 7 Radio Network
11:11 Talk Radio with Simran Singh: Power of Doorways

Jan 24, 2025 · 55:42


New Year's Intention: The Power of Doorways Enjoy this gift from my heart to yours. Let us meet in the space  of intention. "What does your heart truly desire? Where does your spirit wish to roam?" In this enlightening episode, we explore deep questions that lead to personal growth and expansion. Tune in for insights that can transform your journey! "Grace is the bridge between our dreams and reality." - SIMRAN. Explore how to harness this grace in your life by listening to our latest episode. Ready to expand your consciousness? 11:11 Talk Radio dives deeply into conversations of truth, growth and wisdom to assist individuals in expanding personal growth, conscious living, humanity, and self-awareness. SIMRAN is a multiple award-winning author, artist, speaker and mystic of love, compassion and humanity. She creates media, art, books, and online courses that bridge humanity's experience and expression of darkness and light. For the sacred work devoted to the Journey of the Soul, order my new self realization oracle trilogy of LIVING, BEING, KNOWING here. Learn more about Simran here: www.iamsimran.com www.1111mag.com/

Meditera Mera - en interaktiv podd om meditation
Navid Zargham - On meditation and our true nature

Jan 23, 2025 · 74:13


Navid Zargham has been guiding people in discovering their true nature since 2014, through his method Simran meditation, a series of simple and direct exercises designed to bring about an experience-based insight into who you truly are, also known as non-duality. Navid lives outside Uppsala, where he also runs his own retreat center, Narumi. In this episode we talk about, among other things: Navid's own awakening; language's limited ability to describe reality; what self-inquiry can look like; how often and easily we create new self-images; learning whether or not it is necessary to turn one's gaze inward; following what life indicates; becoming comfortable with the uncomfortable, and how our true nature can hold what hurts; being both at the same time; which expressions and qualities are present in presence; boredom as a good teacher; the impermanence of life; how awareness of death can make us more alive; and how we do not take ourselves to presence, but rather stop taking ourselves away from it. Navid then closes our conversation by guiding a self-inquiry meditation. To learn more about Navid, visit his website www.simranmeditation.com. Meditera Mera is a podcast from Mindfully, Sweden's meditation app. Listeners of our podcast can try Mindfully free of charge for 30 days. Start your trial period on our website and download the app from the App Store or Google Play. New users only. For more information about Mindfully, visit our website www.mindfully.nu.

Dreamvisions 7 Radio Network
11:11 Talk Radio with Simran Singh: The Power of Divine Love

Jan 17, 2025 · 54:48


The Power of Divine Love with Marc Aronoff Love is something we all want. Divine Love is something the deepest part of us yearns for. How do we bridge the two? And, in daily life, can we access Divine Love on a continual basis? “The God Chord” running through each of us—a chord of understanding beyond words, ever present and ever wise, connected to the divine. In a sense, all we have to do is remember and trust in this connection to cultivate the experience of God. A relationship with God is nurtured through both a formal practice of meditation and prayer and our informal awareness and intention to allow for a mystical and ultimately spiritual union with God, moment to moment. Humans have an innate capacity to experience and awaken a transcendent experience of God. Our potential to experience the mystical reality of life speaks to the role of Divine Love in the world today: God's presence in our every breath, calling us to awaken, feel, and see with eyes of Christ." - Marc Aronoff Marc Aronoff, MA, LMHC (Author) is a Mystic, psychotherapist, and award-winning playwright. After completing his BA in The Interpretation of Literature at Northwestern University, Marc lived abroad where he wrote for film and theatre and worked as a professional actor, dancer, and choreographer. Marc's drama, The Lantern Bearers, has won several awards, attracting an international audience, and was translated into Italian in 2021. Marc has published feature articles with several national publications and has worked as a ghostwriter with authors from around the world. Marc holds a Master's in Counseling Psychology and currently offers wellness programs including meditation and contemplative retreats with individuals and organizations across the nation. www.lovesguest.com Learn more about Simran here: www.iamsimran.com www.1111mag.com/

Sikh Siyasat Podcasts
Dr. Sewak Singh’s Must Listen Lecture on Present Scenarios Concerning the Sikhs

Jan 15, 2025 · 57:24


Event: Vichar Goshti Date: 9th January 2025 Venue: Gurdwara Sahib, Village Bhagrana, Fatehgarh Sahib Subject: Sewa, Simran and Shahadat in Sikhi This program was organized by local Sangat.

Know What You See with Brian Lowery
Bonus: "Wisdom and Practice" with Simran Jeet Singh

Jan 7, 2025 · 28:15


Here's another podcast we think you'll like. It's called "Wisdom and Practice" and it's hosted by one of Brian Lowery's guests this season, Simran Jeet Singh. Wisdom & Practice uncovers what insights we can gain from our ancient and modern traditions. Simran explores the different means of practice his guests have taken to discover new awareness of themselves, and how we can all cultivate more meaning, purpose and growth in our everyday lives. This episode of Wisdom and Practice features Katherine May, author of "Enchantment" and host of "How We Live Now". She and Simran discuss the loss of our sense of play, the power of attention, and how we can reconnect with a sense of wonder. You can find out more about "Wisdom and Practice" at Simran's website - simranjeetsingh.org - and subscribe to the show on all your favorite podcast platforms. Hope you enjoy!

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
2024 in Post-Transformers Architectures (State Space Models, RWKV) [LS Live @ NeurIPS]

Dec 24, 2024 · 43:02


Happy holidays! We'll be sharing snippets from Latent Space LIVE! through the break, bringing you the best of 2024! We want to express our deepest appreciation to event sponsors AWS, Daylight Computer, Thoth.ai, StrongCompute, Notable Capital, and most of all, our LS supporters who helped fund the gorgeous venue and A/V production! For NeurIPS last year we did our standard conference podcast coverage interviewing selected papers (that we have now also done for ICLR and ICML), however we felt that we could be doing more to help AI Engineers 1) get more industry-relevant content, and 2) recap 2024 year in review from experts. As a result, we organized the first Latent Space LIVE!, our first in person miniconference, at NeurIPS 2024 in Vancouver. Of perennial interest, particularly at academic conferences, is scaled-up architecture research as people hunt for the next Attention Is All You Need. We have many names for them: “efficient models”, “retentive networks”, “subquadratic attention” or “linear attention”, but some of them don't even have any lineage with attention - one of the best papers of this NeurIPS was Sepp Hochreiter's xLSTM, which has a particularly poetic significance as one of the creators of the LSTM returning to update and challenge the OG language model architecture. So, for lack of a better term, we decided to call this segment “the State of Post-Transformers” and fortunately everyone rolled with it. We are fortunate to have two powerful friends of the pod to give us an update here: * Together AI: with CEO Vipul Ved Prakash and CTO Ce Zhang joining us to talk about how they are building Together together as a quote unquote full stack AI startup, from the lowest level kernel and systems programming to the highest level mathematical abstractions driving new model architectures and inference algorithms, with notable industry contributions from RedPajama v2, Flash Attention 3, Mamba 2, Mixture of Agents, BASED, Sequoia, Evo, Dragonfly, Dan Fu's ThunderKittens and many more research projects this year. * Recursal AI: with CEO Eugene Cheah who has helped lead the independent RWKV project while also running Featherless AI. This year, the team has shipped RWKV v5, codenamed Eagle, to 1.5 billion Windows 10 and Windows 11 machines worldwide, to support Microsoft's on-device, energy-usage-sensitive Windows Copilot usecases, and has launched the first updates on RWKV v6, codenamed Finch and GoldFinch. On the morning of Latent Space Live, they also announced QRWKV6, a Qwen 32B model modified with RWKV linear attention layers. We were looking to host a debate between our speakers, but given that both of them were working on post-transformers alternatives... Full Talk on Youtube. Please like and subscribe! Links: All the models and papers they picked: * Earlier Cited Work * Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention * Hungry hungry hippos: Towards language modeling with state space models * Hyena hierarchy: Towards larger convolutional language models * Mamba: Linear-Time Sequence Modeling with Selective State Spaces * S4: Efficiently Modeling Long Sequences with Structured State Spaces * Just Read Twice (Arora et al) * Recurrent large language models that compete with Transformers in language modeling perplexity are emerging at a rapid rate (e.g., Mamba, RWKV). Excitingly, these architectures use a constant amount of memory during inference. 
However, due to the limited memory, recurrent LMs cannot recall and use all the information in long contexts leading to brittle in-context learning (ICL) quality. A key challenge for efficient LMs is selecting what information to store versus discard. In this work, we observe the order in which information is shown to the LM impacts the selection difficulty. * To formalize this, we show that the hardness of information recall reduces to the hardness of a problem called set disjointness (SD), a quintessential problem in communication complexity that requires a streaming algorithm (e.g., recurrent model) to decide whether inputted sets are disjoint. We empirically and theoretically show that the recurrent memory required to solve SD changes with set order, i.e., whether the smaller set appears first in-context. * Our analysis suggests, to mitigate the reliance on data order, we can put information in the right order in-context or process prompts non-causally. Towards that end, we propose: (1) JRT-Prompt, where context gets repeated multiple times in the prompt, effectively showing the model all data orders. This gives 11.0±1.3 points of improvement, averaged across 16 recurrent LMs and the 6 ICL tasks, with 11.9× higher throughput than FlashAttention-2 for generation prefill (length 32k, batch size 16, NVidia H100). We then propose (2) JRT-RNN, which uses non-causal prefix-linear-attention to process prompts and provides 99% of Transformer quality at 360M params., 30B tokens and 96% at 1.3B params., 50B tokens on average across the tasks, with 19.2× higher throughput for prefill than FA2.* Jamba: A 52B Hybrid Transformer-Mamba Language Model* We present Jamba, a new base large language model based on a novel hybrid Transformer-Mamba mixture-of-experts (MoE) architecture. * Specifically, Jamba interleaves blocks of Transformer and Mamba layers, enjoying the benefits of both model families. MoE is added in some of these layers to increase model capacity while keeping active parameter usage manageable. * This flexible architecture allows resource- and objective-specific configurations. In the particular configuration we have implemented, we end up with a powerful model that fits in a single 80GB GPU.* Built at large scale, Jamba provides high throughput and small memory footprint compared to vanilla Transformers, and at the same time state-of-the-art performance on standard language model benchmarks and long-context evaluations. Remarkably, the model presents strong results for up to 256K tokens context length. * We study various architectural decisions, such as how to combine Transformer and Mamba layers, and how to mix experts, and show that some of them are crucial in large scale modeling. We also describe several interesting properties of these architectures which the training and evaluation of Jamba have revealed, and plan to release checkpoints from various ablation runs, to encourage further exploration of this novel architecture. We make the weights of our implementation of Jamba publicly available under a permissive license.* SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformers* We introduce Sana, a text-to-image framework that can efficiently generate images up to 4096×4096 resolution. Sana can synthesize high-resolution, high-quality images with strong text-image alignment at a remarkably fast speed, deployable on laptop GPU. 
Core designs include: * (1) Deep compression autoencoder: unlike traditional AEs, which compress images only 8×, we trained an AE that can compress images 32×, effectively reducing the number of latent tokens. * (2) Linear DiT: we replace all vanilla attention in DiT with linear attention, which is more efficient at high resolutions without sacrificing quality. * (3) Decoder-only text encoder: we replaced T5 with modern decoder-only small LLM as the text encoder and designed complex human instruction with in-context learning to enhance the image-text alignment. * (4) Efficient training and sampling: we propose Flow-DPM-Solver to reduce sampling steps, with efficient caption labeling and selection to accelerate convergence. * As a result, Sana-0.6B is very competitive with modern giant diffusion model (e.g. Flux-12B), being 20 times smaller and 100+ times faster in measured throughput. Moreover, Sana-0.6B can be deployed on a 16GB laptop GPU, taking less than 1 second to generate a 1024×1024 resolution image. Sana enables content creation at low cost. * RWKV: Reinventing RNNs for the Transformer Era* Transformers have revolutionized almost all natural language processing (NLP) tasks but suffer from memory and computational complexity that scales quadratically with sequence length. In contrast, recurrent neural networks (RNNs) exhibit linear scaling in memory and computational requirements but struggle to match the same performance as Transformers due to limitations in parallelization and scalability. * We propose a novel model architecture, Receptance Weighted Key Value (RWKV), that combines the efficient parallelizable training of transformers with the efficient inference of RNNs.* Our approach leverages a linear attention mechanism and allows us to formulate the model as either a Transformer or an RNN, thus parallelizing computations during training and maintains constant computational and memory complexity during inference. * We scale our models as large as 14 billion parameters, by far the largest dense RNN ever trained, and find RWKV performs on par with similarly sized Transformers, suggesting future work can leverage this architecture to create more efficient models. This work presents a significant step towards reconciling trade-offs between computational efficiency and model performance in sequence processing tasks.* LoLCATs: On Low-Rank Linearizing of Large Language Models* Recent works show we can linearize large language models (LLMs) -- swapping the quadratic attentions of popular Transformer-based LLMs with subquadratic analogs, such as linear attention -- avoiding the expensive pretraining costs. However, linearizing LLMs often significantly degrades model quality, still requires training over billions of tokens, and remains limited to smaller 1.3B to 7B LLMs. * We thus propose Low-rank Linear Conversion via Attention Transfer (LoLCATs), a simple two-step method that improves LLM linearizing quality with orders of magnitudes less memory and compute. * We base these steps on two findings. * First, we can replace an LLM's softmax attentions with closely-approximating linear attentions, simply by training the linear attentions to match their softmax counterparts with an output MSE loss ("attention transfer").* Then, this enables adjusting for approximation errors and recovering LLM quality simply with low-rank adaptation (LoRA). * LoLCATs significantly improves linearizing quality, training efficiency, and scalability. 
We significantly reduce the linearizing quality gap and produce state-of-the-art subquadratic LLMs from Llama 3 8B and Mistral 7B v0.1, leading to 20+ points of improvement on 5-shot MMLU. * Furthermore, LoLCATs does so with only 0.2% of past methods' model parameters and 0.4% of their training tokens. * Finally, we apply LoLCATs to create the first linearized 70B and 405B LLMs (50x larger than prior work). * When compared with prior approaches under the same compute budgets, LoLCATs significantly improves linearizing quality, closing the gap between linearized and original Llama 3.1 70B and 405B LLMs by 77.8% and 78.1% on 5-shot MMLU.Timestamps* [00:02:27] Intros* [00:03:16] Why Scale Context Lengths? or work on Efficient Models* [00:06:07] The Story of SSMs* [00:09:33] Idea 1: Approximation -> Principled Modeling* [00:12:14] Idea 3: Selection* [00:15:07] Just Read Twice* [00:16:51] Idea 4: Test Time Compute* [00:17:32] Idea 2: Hardware & Kernel Support* [00:19:49] RWKV vs SSMs* [00:24:24] RWKV Arch* [00:26:15] QWRKWv6 launch* [00:30:00] What's next* [00:33:21] Hot Takes - does anyone really need long context?Transcript[00:00:00] AI Charlie: We're back at Latent Space Live, our first mini conference held at NeurIPS 2024 in Vancouver. This is Charlie, your AI co host. As a special treat this week, we're recapping the best of 2024 going domain by domain. We sent out a survey to the over 900 of you who told us what you wanted, and then invited the best speakers in the Latent Space Network to cover each field.[00:00:24] AI Charlie: 200 of you joined us in person throughout the day, with over 2200 watching live online. Thanks Our next keynote covers the State of Transformers alternative architectures, with a special joint presentation with Dan Fu of Together AI and Eugene Chia of Recursal AI and Featherless AI. We've featured both Together and Recursal on the pod before, with CEO Veepal Vedprakash introducing them.[00:00:49] AI Charlie: And CTO CE Zhang joining us to talk about how they are building together together as a quote unquote full stack AI startup from the lowest level kernel and systems [00:01:00] programming to the highest level mathematical abstractions driving new model architectures and inference algorithms with notable industry contributions from Red Pajama V2, Flash Attention 3, Mamba 2, Mixture of Agents.[00:01:15] AI Charlie: Based, Sequoia, Evo, Dragonfly, Danfoo's Thunder Kittens, and many more research projects this year. As for Recursal and Featherless, we were the first podcast to feature RWKV last year, and this year the team has shipped RWKV v5, codenamed Eagle, to 1. 5 billion Windows 10 and Windows 11 machines worldwide to support Microsoft's on device, end Energy Usage Sensitive Windows Copilot Use Cases and has launched the first updates on RWKV v6, codenamed Finch and Goldfinch.[00:01:53] AI Charlie: On the morning of Latent Space Live, they also announced QRdata UKv6, a QEN32B model [00:02:00] modified with RDWKV linear attention layers. Eugene has also written the most single most popular guest post on the Latent Space blog this year. Yes, we do take guest posts on what he has discovered about the H100 GPU inference NeoCloud market since the successful launch of Featherless AI this year.[00:02:20] AI Charlie: As always, don't forget to check the show notes for the YouTube link to their talk as well as their slides. Watch out and take care.[00:02:27] Intros[00:02:27] Dan Fu: Yeah, so thanks so much for having us. 
So this is going to be a little bit of a two part presentation. My name is Dan. I'm at Together AI, and I'll be joining UCSD as faculty in about a year. And Eugene, you want to introduce yourself?[00:02:46] Eugene Cheah: Eugene, I lead the art activity team, and I, I'm CEO of Featherless, and we both work on this new post transformer architecture space.[00:02:55] Dan Fu: Yeah, so yeah, so today we're really excited to talk to you a little bit [00:03:00] about that. So first I'm going to give a broad overview of kind of the last few years of progress in non post transformer architectures. And then afterwards Eugene will tell us a little bit about the latest and the greatest and the latest frontier models in this space.[00:03:16] Why Scale Context Lengths? or work on Efficient Models[00:03:16] Dan Fu: So, the story starts with Scaling. So this is probably a figure or something like this that you've seen very recently. Over the last five to six years, we've seen models really scale up in parameter size, and that's brought with it a bunch of new capabilities, like the ability to talk to you and tell you sometimes how to use your Colab screens.[00:03:35] Dan Fu: But another place where we've seen scaling especially recently is scaling in context length. So this can mean Having more text inputs for your models, but it can also mean things like taking a lot of visual token inputs image inputs to your models or generating lots of outputs. And one thing that's been really exciting over the last few months or so is that we're, we're seeing scaling, not only during training time, but also [00:04:00] during test time.[00:04:00] Dan Fu: So this is one of the, the, this is the iconic image from the OpenAI 01 release. Not only are we starting to scale train time compute, but we're also starting to scale test time compute. Now if you're familiar with our attention and our transformer architectures today, this graph on the right might look a little bit scary.[00:04:19] Dan Fu: And one of the reasons is that the implications are a little bit Interesting. So what does it mean if we want to continue having smarter and smarter models? Do we just need to start building bigger, bigger data centers, spending more flops? Is this this little Dolly 3, we need more flops, guys? Is this going to be the future of all of AI?[00:04:39] Dan Fu: Or is there a better way, another path forward? Maybe we can get the same capabilities that we've gotten used to, But for a lot less compute, a lot less flops. And one of the things that we're going to talk about today is specifically looking at that core attention operator in some of these models.[00:04:57] Dan Fu: And the reason is that so this is just some, some [00:05:00] basic you know, scaling curves, but attention has compute that scales quadratically in the context length. So that means that if you're doing something like test time compute and you want to spend a bunch of tokens thinking about what comes next, the longer that that goes the, the, the more tokens you spend on that, that compute grows quadratically in that.[00:05:19] Dan Fu: One of the questions that we're interested in is, can we take that basic sequence model, that basic sequence primitive at the bottom, and get it to scale better? Can we scale in, let's say, n to the 3 halves or n log n? So in, in the first part of the talk, so we just went over the introduction. 
What I'm gonna do over the next few slides is just talk about some of the key advances and ideas that have shown over the past few years since maybe early 2020 to, to now that shown promise that this might actually be possible.[00:05:48] Dan Fu: That you can actually get potentially the same quality that we want while scale, while scaling better. So to do that, we're and, and basically the, the story that we're gonna look is we're gonna start to see [00:06:00] how. So this is a basic graph of just the past couple years of progress of perplexity where that blue line, that dotted blue line, is attention.[00:06:07] The Story of SSMs[00:06:07] Dan Fu: It's your basic transformer, full dense attention. And then the dots coming down are some of the methods that you'll see in this presentation today. We're going to turn the clock back all the way to 2020. So this, this, this question of can we make attention subquadratic? Basically, as soon as we said attention is all you need, People started asking this question.[00:06:28] Dan Fu: So we have this quadratic attention operator. Can we do better? I'll briefly talk about why attention is quadratic. And the basic thing that happens, if you're not familiar, is that you have these inputs, these keys and queries. And what you do in this attention matrix, this S matrix over here, is that you're using, you're comparing every token in your input to every other token.[00:06:49] Dan Fu: So when I try to do something like upload a whole book to Gemini, what happens beyond the Maybe not Gemini, because we don't necessarily know what architecture is. But let's say we upload it to LLAMA, what happens beyond [00:07:00] the scenes, behind the scenes, is that it's going to take every single word in that book and compare it to every other word.[00:07:05] Dan Fu: And this has been a really, it's, it's led to some pretty impressive things. But it's kind of a brute forcing of the way that you would try to interpret a interpret something. And what attention does in particular is the, and then what attention, sorry, don't want to. Okay, no, no laser pointer. What, what attention does afterwards is that instead of always operating in this quadratic thing, it takes a row wise softmax over this matrix, and then multiplies it by this values matrix.[00:07:32] Dan Fu: So, one of the key points to notice is that the output size is always going to be the same as the inputs, at least in standard self attention. So one of the first things that folks tried to do around 2020 is this thing called linear attention, which is just, just noticing that if we take out this softmax from here, if we take out this non linearity in the middle of the attention operation, and then if you compute the keys and the values operation first, you actually never hit this quadratic bottleneck.[00:07:57] Dan Fu: So that, that's potentially a way [00:08:00] to get a lot more computationally efficient. And there are various ways to do this by basically using feature maps or try to approximate this overall attention computation. But some of this work sort of started to hit a wall in 2020. And the basic challenges were, were two.[00:08:16] Dan Fu: So one was quality. It was back then, it was kind of hard to, to get good quality with these linear attention operators. The other one was actually hardware efficiency. So these, this feature map that was just shown by a simplify simplify here. 
Actually ends up being quite computationally expensive if you just implement it naively.[00:08:34] Dan Fu: So you started having these operators that not only were you sure, you're not really sure if they have the same quality, but also they're actually just wall clock slower. So you kind of end up getting the worst of both worlds. So this was the the stage. So that kind of sets the stage for four years ago.[00:08:49] Dan Fu: Keep this in mind because linear attention is actually going to come back in a few years once we have a better understanding. But one of the works that started kicking off this, this [00:09:00] mini revolution in post transformer architectures was this idea called states based model. So here the seminal work is, is one about our work queue in 2022.[00:09:09] Dan Fu: And this, this piece of work really brought together a few ideas from, from some long running research research lines of work. The first one was, and this is really one of the keys to, to closing the gap in quality was just using things that, that if you talk to a, a, an electrical engineer off the street, they might know off, off the, like the back of their hand.[00:09:33] Idea 1: Approximation -> Principled Modeling[00:09:33] Dan Fu: But taking some of those properties with how we model dynamical systems in signal processing and then using those ideas to model the inputs, the, the text tokens in, for example a transformer like Next Token Prediction Architecture. So some of those early states-based model papers were looking at this relatively, relatively simple recurrent update model that comes from maybe chapter one of a signal processing class.[00:09:59] Dan Fu: But then using [00:10:00] some principle theory about how you should do that recurrent update in order to really get the most that you can out of your hidden state, out of your out of your sequence. So that, that was one key idea for quality and. When this was eventually realized, you started to see a bunch of benchmarks that were pretty sticky for a few years.[00:10:20] Dan Fu: Things like long range arena, some long sequence evaluation benchmarks, There was stuff in time series, time series analysis. They started to, you started to see the quality tick up in meaningful ways. But the other key thing that What's so influential about these states based models is that they also had a key idea about how you can compute these things efficiently.[00:10:45] Dan Fu: So if you go back to your machine learning 101 class where you learned about RNNs, one thing that you may have learned is that they don't paralyze as well as detention, because if you just run them naively, you have to do this kind of sequential update to process new tokens, [00:11:00] whereas in attention, you can process all the tokens in parallel at one time.[00:11:04] Dan Fu: One of the key insights behind the S4 paper was that these recurrent models, you could take them and you could also formulate them as a convolution. And in particular, with a convolution, you could, instead of using a PyTorch conv1d operation, you can compute that with the FFT. And that would give you n log n compute in the in the sequence length n with an operator that was relatively well optimized for modern hardware.[00:11:28] Dan Fu: So those are really, I'd say, the two key ideas in 2022 that started allowing these breakthroughs to happen in these non transformer architectures. 
So, these ideas about how to principally model sorry, how to model the recurrent updates of a mo of, of a sequence in a principled way, and also these key ideas in how you can compute it efficiently by turning it into a convolution and then scaling it up with the FFT.[00:11:53] Dan Fu: Along those same lines, so afterwards we started putting out some work on specialized kernels, so just [00:12:00] like we have flash attention for transformers, we also have works like flash fft conf, and if you look at these lines of work oftentimes when, whenever you see a new architecture, you see a new primitive one of the, one of the table stakes now is, do you have an efficient kernel so that you can actually get wall clock speed up?[00:12:14] Idea 3: Selection[00:12:14] Dan Fu: So by 2022, We are starting to have these models that had promising quality primitives, but and, and also promising wall clocks. So you could actually see regimes where they were better than transformers in meaningful ways. That being said, there were, there's still sometimes a quality gap, particularly for language modeling.[00:12:33] Dan Fu: And because languages, It's so core to what we do in sequence modeling these days the, the next, the next key idea that I'm going to talk about is this idea of selection mechanisms. And this is basically an idea of, so you have this recurrent state that you're keeping around that just summarizes everything that, that came before.[00:12:50] Dan Fu: And to get a good sequence model, one of the things that you really need to be able to do is have the model learn what's the best way to pick out pieces from that recurrent [00:13:00] state. So one of the, one of the major ideas here in a line of work called H3, Hungry Hungry Hippos, and also these hyena models were One way you can do this is by just adding some simple element wise gates.[00:13:13] Dan Fu: So versions of these ideas have been around for decades. If you squint at the LSTM paper you, you can probably find, find this gating mechanism. But turns out you can take those old ideas, add them into these new. state space models, and then you can see quality start to pick up. If you've heard of the Mamba model, this also takes the selection to the next level by actually making some changes in that fundamental recurrent state space.[00:13:40] Dan Fu: So, it's not only just this gating that happens around the SSM layer, but also you can actually make The ABCD matrices of your state space model, you can make them data dependent, which will allow you to even better select out different pieces from your hidden state depending on what you're seeing. I'll also point out if you look at the [00:14:00] bottom right of this figure, there's this little triangle with a GPU SRAM, GPU HBM, and this, this is just continuing that trend of when you have a new architecture you, you, you also release it with a kernel to, to, to show that it is hardware efficient, that it, that it can be hardware efficient on modern hardware.[00:14:17] Dan Fu: The, the, one of the next cool things that happened is once we had this understanding of these are the basic pieces, these are the basic principles behind some of the sequence models linear attention actually started to come back. 
Earlier this year there was a model called BASED, from Simran Arora and some other folks. The two-second summary is that it combined a more principled version of linear attention, one that uses a Taylor approximation of softmax attention, with a simple sliding window attention, and it started to be able to expand the Pareto frontier of how much data you can recall from your sequence versus how small your recurrent state size is.

[00:14:58] Dan Fu: So those orange dots at the top there are showing models with smaller recurrent states that can still recall more memory.

[00:15:07] Just Read Twice

[00:15:07] Dan Fu: And the last major idea that I think has been influential in this line of work, and is relatively late-breaking, just a few months ago, is the basic observation that when you have models that are fundamentally more efficient in the sequence length, you maybe don't want to prompt them or use them in exactly the same way.

[00:15:26] Dan Fu: So this was a really cool paper called Just Read Twice, also from Simran. It basically said: hey, all these efficient models can process tokens so much more cheaply than transformers that they can sometimes have unfair advantages compared to a simple transformer model. Take, for example, the standard use case where you have some long document, you pass it in as input, and then you ask some question about it.

[00:15:44] Dan Fu: One problem you might imagine for a recurrent model with a fixed state size is: let's say the article is very long and you're trying to ask about some really niche thing. You can imagine it might be hard for the model to know ahead of time what information to put into the hidden state.

[00:16:04] Dan Fu: But these models are so much more efficient that you can do something really simple: you can just write down the document, write down the question, write down the document again, and then write down the question again. The second time you go over that document, you know exactly what to look for.

[00:16:25] Dan Fu: And the cool thing is that this results in better quality, especially on recall-intensive tasks. But the other interesting thing is that it really takes advantage of the more efficient architectures we have here. So one of the other influential ideas in this line of work is that if you change the fundamental compute capabilities of your model and the way it scales, you can actually start to query it at test time differently.

[00:16:51] Idea 4: Test Time Compute

[00:16:51] Dan Fu: And this, of course, goes back to those slides on test-time compute. While everybody is looking at, say, test-time compute for big transformer models, I think a potentially really interesting research question is: how does that change with this new, next generation of models?

[00:17:09] Dan Fu: So I'll briefly summarize what some of those key ideas were, and then show you briefly what the state of the art is today.
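Before moving on, a minimal sketch of the Taylor-approximation idea behind BASED mentioned above: replace exp(q.k) with its second-order expansion 1 + q.k + (q.k)^2/2, which factors through a feature map phi so causal attention can be computed with running sums in linear time. The sliding-window half of BASED and its exact feature map and scaling are omitted; the sizes below are illustrative only.

```python
import numpy as np

def taylor_feature_map(x):
    """phi(x) such that phi(q) . phi(k) = 1 + q.k + (q.k)^2 / 2 (2nd-order Taylor of exp)."""
    outer = np.einsum("td,te->tde", x, x).reshape(x.shape[0], -1)
    return np.concatenate([np.ones((x.shape[0], 1)), x, outer / np.sqrt(2)], axis=-1)

def quadratic_taylor_attention(q, k, v):
    """Reference: causal attention with the Taylor kernel, materializing the T x T matrix."""
    scores = np.tril(1.0 + q @ k.T + 0.5 * (q @ k.T) ** 2)     # causal mask
    return (scores @ v) / scores.sum(-1, keepdims=True)

def linear_taylor_attention(q, k, v):
    """Same output in O(T): keep running sums of phi(k) outer v and of phi(k)."""
    fq, fk = taylor_feature_map(q), taylor_feature_map(k)
    kv_state = np.zeros((fq.shape[1], v.shape[1]))             # the fixed-size recurrent state
    k_state = np.zeros(fq.shape[1])
    out = np.empty_like(v)
    for t in range(q.shape[0]):
        kv_state += np.outer(fk[t], v[t])
        k_state += fk[t]
        out[t] = (fq[t] @ kv_state) / (fq[t] @ k_state)
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    q, k, v = rng.standard_normal((3, 64, 8)) * 0.5
    assert np.allclose(quadratic_taylor_attention(q, k, v), linear_taylor_attention(q, k, v))
```

The Just Read Twice trick, by contrast, needs no code at all: you concatenate document, question, document, question, and let the cheap second pass fill the recurrent state with exactly the details the question needs.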
So the four key ideas are: first, instead of just doing a simple linear attention approximation, take ideas we know from other fields like signal processing and do a more principled approach to modeling the sequence.

[00:17:32] Idea 2: Hardware & Kernel Support

[00:17:32] Dan Fu: Another key idea throughout all these lines of work is that you really want hardware and kernel support from day one. Even if your model is theoretically more efficient, if somebody goes and runs it and it's two times slower, one of the things we've learned is that it's just going to be dead on arrival. So you want to be designing your architectures with the hardware in mind.

[00:17:49] Dan Fu: One of the key machine learning ideas that has been important for quality is making sure you encode different ways the model can select from its hidden state, and really focusing on that as a key decider of quality. And finally, one of the emerging questions for this line of work, and something that's quite interesting, is: what are the right test-time paradigms for these models, and how do they change relative to what you might do for a standard transformer?

[00:18:15] Dan Fu: I'll briefly end this section. I've labeled this slide "where we are yesterday" because Eugene is going to talk about some new models that he released literally this morning. But as of yesterday, some of the really cool results out of these efficient alternative models: AI21 trained a hybrid MoE called Jamba, which currently seems to be the state of the art for these non-transformer architectures.

[00:18:40] Dan Fu: NVIDIA and MIT put out a new diffusion model called SANA recently, and one of their key observations is that you can take a standard diffusion transformer, replace the layers with linear attention, and that lets you scale to much larger images and much larger sequences more efficiently.

[00:19:07] Dan Fu: And one thing that I don't think anybody would have called a few years ago is that one of those gated state space models ended up on the cover of Science, because a great group of folks went and trained some DNA models. That's Michael Poli and Eric Nguyen, from Stanford and the Arc Institute.

[00:19:26] Dan Fu: So we're really at an exciting time in 2024, where these non-transformer, post-transformer architectures are showing promise across a wide range of modalities, applications, and tasks. And with that, I'll pass it on to Eugene, who can tell you a little bit about the latest and greatest with RWKV.

[00:19:49] RWKV vs SSMs

[00:19:49] Eugene Cheah: So, one common question we tend to get asked is: what's the difference between RWKV and state space models? I think one of the key things to understand about the difference between the two groups is that we are more of an open-source, random-internet-meets-academia kind of situation. Most of us never wrote any paper; we basically looked at RNNs and linear attention when "Attention Is All You Need" came out, and we decided: hey, there's a quadratic scaling problem, why don't we try fixing that instead?
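To make the quadratic scaling problem Eugene mentions concrete, here is a back-of-the-envelope comparison of attention's cost and cache growth against a fixed-state recurrent layer. Every number below (hidden size, layer count, context length, state size) is made up for illustration and does not describe any specific model.

```python
# Back-of-the-envelope scaling; all numbers are illustrative, not any specific model.
d_model, n_layers, T = 4096, 32, 128_000        # hidden size, layer count, context length

# Softmax attention: the score matrix alone costs ~2*T^2*d FLOPs per layer,
# and the KV cache grows linearly with T (K and V, fp16 = 2 bytes each).
attn_flops_per_layer = 2 * T * T * d_model
kv_cache_bytes = 2 * n_layers * T * d_model * 2

# A fixed-state recurrent / linear-attention layer: work grows linearly with T, and the
# "cache" is a constant-size state per layer (the state size here is an arbitrary choice).
state_dim = 64 * d_model
recurrent_flops_per_layer = 2 * T * state_dim
state_bytes = n_layers * state_dim * 2

print(f"attention: ~{attn_flops_per_layer:.1e} FLOPs/layer, KV cache ~{kv_cache_bytes / 1e9:.0f} GB")
print(f"recurrent: ~{recurrent_flops_per_layer:.1e} FLOPs/layer, state ~{state_bytes / 1e6:.0f} MB")
```

The exact constants do not matter; the point is the T-squared term and the linearly growing cache on one side versus linear work and a constant-size state on the other.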
So we ended up developing our own branch, but we share ideas back and forth.

[00:20:30] Eugene Cheah: And we do all this actively in Discord, GitHub, etc. It was so bad for a few years that the group's average H-index was so close to zero that EleutherAI actually came in and helped us write our first paper. Great: now our H-index is three, apparently. But the thing is, a lot of these experiments led to results, and essentially we took the same ideas from linear attention and built on them.

[00:21:01] Eugene Cheah: So, to take a step back into how RWKV handles its own attention mechanism and achieves the same goal of O(n) compute: the focus of our overall goal is to make AI accessible to everyone, regardless of language, nation, or compute. We actually train our models primarily on over a hundred languages, which is another topic altogether, and our goal is to train on even 200 languages, to cover all the languages in the world.

[00:21:23] Eugene Cheah: But at the same time, we work on this architecture to lower the compute cost so that people can run it on Raspberry Pis and on anything. So, how did RWKV break the dependency of the LSTM token flow? I think it's probably easier to understand the architecture through the RNN lens, because that's where we built from. The state space folks kind of tried to start anew and take lessons from there, so there's a little bit of divergence between us. And this is, so to speak, our version of linear attention.

[00:22:05] Eugene Cheah: To take a step back: all foundation models, be they transformers or non-transformers, at a very high level pump in the tokens (I mean text), turn them into embeddings, go through a lot of layers, generate a lot of states (whether that's the QKV cache, or RNN states, or RWKV states), and output an embedding. We just take more layers and more embeddings, and somehow that magically works.

[00:22:23] Eugene Cheah: If you remember your ancient RNN lessons, the general idea is that you have the embedding information flowing all the way up, and you take that information and flow it back down, and then you process it as part of your LSTM layers. That's how it generally works. Karpathy is quoted as saying that RNNs are actually unreasonably effective. The problem is that this is not scalable. To start doing work on the second token, you need to wait for the first token, and likewise for the third token and the fourth token, and so on.

[00:22:55] Eugene Cheah: That is CPU land, not GPU land. You can have an H100 and you can't even use 1 percent of it. So that's kind of why RNNs didn't really take off in the direction we wanted, like billions of parameters, when it comes to training.

[00:23:13] Eugene Cheah: So, what did RWKV version 0 do? Boom. We just did the dumbest, lamest thing. Sorry, this line here is the bottleneck for the RNN. We did the dumb thing of removing that line. And it kind of worked. It trained. It sucked, but it kind of worked. Then, well, no one cared because the loss was crap, but how do we improve that?
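A toy illustration of the bottleneck being described: a nonlinear RNN update forces a step-by-step loop, while a linear per-channel decay recurrence can be evaluated for every position at once. This is only meant to show the parallelism argument, not RWKV's actual layer (which mixes tokens differently); the cumulative-sum trick below stands in for the parallel scans and chunked kernels real implementations use.

```python
import torch

T, d = 16, 4
x = torch.randn(T, d)
a = 0.8 + 0.2 * torch.rand(d)     # per-channel decay, kept close to 1 for stability
W = 0.1 * torch.randn(d, d)

# 1) Nonlinear RNN update: h_t = tanh(h_{t-1} W + x_t). Every step needs the previous
#    step's output, so the T steps cannot be fused into one batched matmul ("CPU land").
h = torch.zeros(d)
for t in range(T):
    h = torch.tanh(h @ W + x[t])

# 2) If the time mixing is instead a *linear* per-channel decay, h_t = a * h_{t-1} + x_t,
#    it unrolls to h_t = sum_{s<=t} a^(t-s) * x_s, which can be computed for every t at
#    once ("GPU land"). Illustrative only; not RWKV's actual time-mix formulation.
P = torch.cumprod(torch.cat([torch.ones(1, d), a.expand(T - 1, d)]), dim=0)  # P[t] = a^t
h_parallel = P * torch.cumsum(x / P, dim=0)

# Reference sequential loop for the linear recurrence, to check the closed form above.
h_ref, outs = torch.zeros(d), []
for t in range(T):
    h_ref = a * h_ref + x[t]
    outs.append(h_ref)
assert torch.allclose(h_parallel, torch.stack(outs), atol=1e-5)
```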
And that's essentially where we moved forward, because if you look at this kind of flow, you can actually get your GPU saturated quickly, with the computation essentially cascading.

[00:23:41] Eugene Cheah: (I'm just waiting for this animation to loop again.) Once the computation for your first layer, your first token, finishes, you start to cascade your compute all the way until you're using 100 percent of the GPU. So we worked on it, and we went along the principle that as long as we keep this general architecture, where we can cascade and be highly efficient, nothing is sacred in our architecture.

[00:24:06] Eugene Cheah: And we have done some crazy ideas. In fact, if you ask me to explain some things in the paper: officially, in the paper, I'll say we had this idea and we wrote it this way. The reality is that someone came with the code, we tested it, it worked, and then we rationalized it later.

[00:24:24] RWKV Arch

[00:24:24] Eugene Cheah: The general idea behind RWKV is that we have two major blocks: time mix and channel mix. Time mix generally handles long-term memory states, where essentially we apply matrix multiplications and SiLU activation functions to process an input embedding into an output embedding. I'm oversimplifying, because this calculation has changed in every version, and we have version 7 right now.

[00:24:50] Eugene Cheah: Channel mix is similar to BASED in the sense that it does shorter-term attention: it just looks at the neighboring token, the token before it, because there's a shift in the token-shift matrix.

[00:25:09] Eugene Cheah: I don't really want to go too much into the papers themselves, because we have three papers on this: "RWKV: Reinventing RNNs for the Transformer Era"; "Eagle and Finch", the matrix-valued-state update covering versions 5 and 6; and GoldFinch, which is our hybrid model. We are already writing the paper for version 7, named Goose (our architectures are named after birds). And I'm also going to cover QRWKV and some of the related conversion work.

[00:25:48] Eugene Cheah: Where did that lead? Well, we are all GPU poor, and to be clear, most of this research is done on only a handful of H100s, which one Google researcher told me was roughly the experiment budget for a single researcher there. So our entire organization has less compute than a single researcher at Google.

[00:26:05] Eugene Cheah: So one of the things we explored was how to convert transformer models instead. Because someone already paid the millions of dollars for training, why don't we take advantage of those weights? I believe Together AI worked on LoLCATs for the Llama side of things, and we took some ideas from there as well, and we essentially did that for RWKV.

[00:26:15] QRWKV6 launch

[00:26:15] Eugene Cheah: And that led to QRWKV6, which we just dropped today: a 32B instruct preview model, where we took the Qwen 32B Instruct model, froze the feed-forward layers, removed the QKV attention layers, and replaced them with RWKV linear layers.

[00:26:32] Eugene Cheah: So to be clear, this means we do not have the RWKV channel-mix layer; we only have the time-mix layer.
But once we do that, we train the RWKV layers. The important part is that the feed-forward layers need to be frozen so that the new attention can be learned. Then we unfreeze the feed-forward layers and train all the layers together with a custom learning-rate schedule, so that they can learn how to work together.

[00:26:54] Eugene Cheah: The end result, surprisingly (and, to be honest, to the frustration of the RWKV MoE team, which ended up releasing their model on the same day), was that with just a few hours of training on two nodes, we managed to get it roughly on par with the original Qwen 32B model.

[00:27:26] Eugene Cheah: In fact, the first run completely confused us. I was telling Daniel Goldstein (Smerky), who leads most of our research coordination: when you pitched me this idea, you told me that at best we would get the same level of performance. You didn't tell me that the ARC challenge score and the Winogrande score would shoot up. I don't know what's happening there, but it did.

[00:27:47] Eugene Cheah: The MMLU score dropping, that was expected, because if you think about it, when we were training all the layers we essentially Frankensteined this thing and did brain damage to the feed-forward network with the new RWKV layers. But 76 percent: hey, somehow that's retained, and we can probably train it further. We didn't even spend more than three days training this, so there's a lot more that can be done, hence the preview.

[00:28:10] Eugene Cheah: This brings up a big question, because we are already in the process of converting the 72B. This is actually an extremely compute-efficient way to test our attention mechanism; it becomes a shortcut. We are already planning to do our version 7 and our hybrid architecture this way, because we don't need to train from scratch, and we get a really good model out of it.

[00:28:41] Eugene Cheah: And the other thing that is uncomfortable to say, now that we are doing this at the 70B scale, is that if this scales correctly to 128k context length (I'm not even talking about a million), the majority of enterprise workloads today run on 70B-class models at under 32k context length. That means that if this works and the benchmarks match, we can replace the vast majority of current AI workloads, unless you want super long context. And then, sorry, can someone give us more GPUs? Because we do need the VRAM for super long context, sadly. So yeah, that's what we are working on, and essentially we are excited to push this further.

[00:29:02] Eugene Cheah: And this conversion process, to be clear, I don't think is going to be exclusive to RWKV. It will probably work for Mamba as well; I don't see why not. And we will probably see more ideas, more experiments, more hybrids.

[00:29:28] Eugene Cheah: One of the weirdest things I want to say outright, and I confirmed this with the Black Mamba team and the Jamba team, because we did the GoldFinch hybrid model, is that none of us understands why a hard hybrid of a state-based model (be it RWKV or state space) with a transformer performs better than the baseline of either. When you train one and then you replace components, you expect the same results. That's our pitch, that's our claim. But somehow, when we jam both together, it outperforms both.
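A hedged sketch of the staged conversion recipe described above: swap each attention block for a recurrent stand-in, freeze everything else while the new layers learn, then unfreeze and fine-tune jointly at a lower learning rate. The ToyTimeMix layer, the block layout, and the learning rates are all invented; this is not the QRWKV6 code or Qwen's module structure, only the shape of the procedure.

```python
import torch
import torch.nn as nn

class ToyTimeMix(nn.Module):
    """Stand-in recurrent replacement for an attention block (invented for illustration;
    the real conversion drops in RWKV time-mix layers)."""
    def __init__(self, dim: int):
        super().__init__()
        self.proj_in, self.proj_out = nn.Linear(dim, dim), nn.Linear(dim, dim)
        self.decay = nn.Parameter(torch.zeros(dim))

    def forward(self, x):                                   # x: (batch, seq, dim)
        a, h, outs = torch.sigmoid(self.decay), torch.zeros_like(x[:, 0]), []
        for t in range(x.shape[1]):
            h = a * h + self.proj_in(x[:, t])
            outs.append(h)
        return self.proj_out(torch.stack(outs, dim=1))

class ToyBlock(nn.Module):
    """Minimal transformer-style block with the two pieces the recipe cares about."""
    def __init__(self, dim: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

dim = 64
blocks = nn.ModuleList(ToyBlock(dim) for _ in range(4))

# Step 1: replace each attention module with the recurrent layer; keep the FFNs as-is.
for block in blocks:
    block.attn = ToyTimeMix(dim)

# Step 2: freeze everything except the new layers, so they learn to stand in for attention.
for name, p in blocks.named_parameters():
    p.requires_grad = "attn" in name
stage1_opt = torch.optim.AdamW([p for p in blocks.parameters() if p.requires_grad], lr=1e-4)
# ... run the stage-1 training loop here ...

# Step 3: unfreeze the FFNs too and fine-tune everything at a gentler learning rate
# (a stand-in for the custom schedule mentioned in the talk) so the parts co-adapt.
for p in blocks.parameters():
    p.requires_grad = True
stage2_opt = torch.optim.AdamW(blocks.parameters(), lr=1e-5)
```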
And that's an area where we only have, like, four experiments across four teams, so a lot more needs to be done. But these are the things that excite me, because that's potentially where we can move ahead. Which brings us to what comes next.

[00:30:00] What's next

[00:30:00] Dan Fu: So this part is just some stuff that we're excited about, and maybe some wild speculation on what's coming next. And of course, this is also the part that will be more open to questions.

[00:30:12] Dan Fu: A couple of things that I'm excited about: continued hardware-model co-design for these models. One of the things we've put out recently is a library called ThunderKittens. It's a CUDA library, and one of the things we found frustrating is that every time we built one of these new architectures (and I'm sure you had the exact same experience), we'd have to go and spend two months in CUDA land writing these new efficient kernels. And if we decided to change one thing in PyTorch, one line of PyTorch code is like a week of CUDA code, at least.

[00:30:47] Dan Fu: So one of our goals with a library like ThunderKittens was to break down the key principles: what are the key hardware features, what are the key compute pieces that you get from the hardware? For example, on H100 everything really revolves around a warp-group matrix-multiply operation, so you really want your operation to be able to split into relatively small matrix-matrix multiplies, like multiplying two 64-by-64 matrices. And if you know that ahead of time when you're designing your model, that probably gives you some information about how you set the state sizes and how you set the update function.

[00:31:27] Dan Fu: So with ThunderKittens we basically built a whole library around the basic idea that your basic compute primitive should not be a float; it should be a matrix, and everything should just be matrix compute. And we've been using that both to re-implement some existing architectures and also to start designing some new ones that are really built with this tensor-core primitive in mind.

[00:31:44] Dan Fu: Another thing that at least I'm excited about: over the last four or five years, we've really been looking at language models as the next thing. But if you've been paying attention to Twitter, there's been a bunch of new next-generation models coming out.

[00:32:04] Dan Fu: So there are video generation models that can run in real time, driven by your mouse and your keyboard, which I'm told, if you play with them, only have a few seconds of memory. Can we take that kind of model and give it a very long context length, so that you could actually maybe generate an entire game state at a time? What does that look like for the model? You're certainly not going to do a giant quadratic attention computation to try to run that.

[00:32:25] Dan Fu: Or take some of these new video generation models that came out. Sora came out, I don't know, two days ago now.
But it has super long queue times and super long generation times, so that's probably a quadratic attention operation at the bottom of it. What if we could remove that and get the same quality, but much faster generation?

[00:32:43] Dan Fu: Or some of the demos we saw from Paige earlier today. If I have a super long conversation with my Gemini bot, what if I wanted it to remember everything it has seen in the last week? Maybe you don't, for personal reasons, but what if I did? What does that mean for the architecture? I think that's certainly something I'm pretty excited about, and I'm sure you're excited about it too. So, I think we were supposed to have some hot takes, but I honestly don't remember what our hot takes were.

[00:33:21] Hot Takes - does anyone really need long context?

[00:33:21] Eugene Cheah: Yeah, including the next slide. Hot takes, yes, these are our

[00:33:25] Dan Fu: hot takes.

[00:33:25] Eugene Cheah: I think the big one on Twitter that we saw, that we shared, was the question: is RAG still relevant in the future of state-based models?

[00:33:38] Dan Fu: Let's see, I haven't played too much with RAG. But when I have, I'll say I found it a little challenging to do research on, because we had this experience over and over again where you could have an embedding model of any quality, a really, really bad embedding model or a really, really good one by any measure of good, and for the final RAG application it kind of didn't matter. That's what I'll say about RAG while I'm being recorded. I know it doesn't actually answer the question, but...

[00:34:13] Eugene Cheah: Yeah, so I think a lot of folks are extremely excited about the idea of RWKV or state space models potentially having infinite context. But I think the reality is that when we say infinite context, we just mean a different kind of infinite context, or, as was covered earlier, you need to test the model differently.

[00:34:37] Eugene Cheah: So think of it more along the lines of a human. Like, I don't remember what I ate for breakfast yesterday. And we humans are not quadratic transformers. If we were, if, let's say, our brain size increased for every second we live, we would have exploded by the time we were five years old or something like that.

[00:35:08] Eugene Cheah: And I think, fundamentally for us, regardless of whether it's RWKV, state space, xLSTM, etc., the general idea is: instead of that expanding state and that increase in computational cost, what if we have a fixed state size? And information theory dictates that a fixed state size will have a limit. Just how big that limit is is the question. RWKV is running at 40 megabytes for its state; a future version might run at 400 megabytes. Mathematically, at the theoretical maximum, that is like millions of tokens; it's just that I guess we're all more inefficient about it, so maybe we hit 100,000.

[00:35:29] Eugene Cheah: And that's kind of the work we're doing, trying to push it and maximize it. And that's where the models will start differing, because they will choose to forget things and choose to remember things.
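Some rough arithmetic behind the fixed-state argument, taking the 40 MB figure at face value. The per-token byte budget below is entirely made up; the point is only that a fixed state puts a hard ceiling on exact recall, which is what "information theory dictates a limit" means in practice.

```python
# Rough arithmetic behind "a fixed state has an information-theoretic limit".
# The 40 MB figure comes from the talk; everything else here is a made-up budget.
state_bytes = 40 * 1024**2              # 40 MB of recurrent state
values_in_state = state_bytes // 2      # fp16 -> 2 bytes per number
print(f"~{values_in_state / 1e6:.0f}M numbers held in the state at any one time")

# If (hypothetically) the model needed ~1 kB of that state per token to answer
# exact recall questions, the ceiling on perfectly remembered context would be:
budget_per_token = 1024
print(f"~{state_bytes // budget_per_token:,} tokens of exact recall under that budget")
```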
And that's why I think there might be some element of RAG being right, but it may not be the same RAG. It may be that the model learns things and goes: hmm, I can't remember that article, let me do a database search. Just like us humans: when we can't remember an article at the company, we do a search on Notion.

[00:36:00] Dan Fu: I think something that would be really interesting is... so right now, one intuition about language models is that all those parameters are there just to store random facts about the world. And this intuition comes from the observation that if you take a really small language model, it can do things like talk to you, it can learn the style of conversation, but where it usually falls over compared to a much larger one is that it's just a lot less factual about the things it knows or can do.

[00:36:32] Dan Fu: But that points to all those weights we're spending, all that SGD we're spending to train these models, just being used to store facts. And we have things like databases that are pretty good at storing facts. So I think one thing that would be really interesting is if we could actually have some sort of outside data store that a language model can look at, one that maybe has some sort of gradient descent in it; that would be quite interesting. And then maybe you could edit it, delete facts, you know, change who's president so that it doesn't get lost.

[00:37:04] Vibhu: Can we open up Q&A and hot takes for the audience? I have a hot-take Q&A: do these scale? When a 405B state space model exists, RAG exists, and no one does long context, who's throwing in 2-million-token questions? Hot takes?

[00:37:24] Dan Fu: The "who's throwing in 2-million-token questions", I think, is a really good question. I was actually going to offer that as a hot take. My hot take was going to be that long context doesn't matter. I know I just gave a whole talk about it, but, you know, what's the point of doing research if you can't play both sides?

[00:37:40] Dan Fu: I think for both of us, the reason we first got into this was just the first-principles question: there's this quadratic thing, clearly intelligence doesn't need to be quadratic, what is going on, can we understand it better? Since then it's kind of turned into a race, which has been exciting to watch, like how much context you can take in.

[00:38:03] Dan Fu: But I think it's right: nobody is actually putting a two-million-token prompt into these models. And if they are, maybe we can go design a better model to do that particular thing. Yeah, what do you think about that? You've also been working on this. Do you think long context matters?

[00:38:19] Eugene Cheah: So I'm going to burn a bit. How many of you remember the news of Google Gemini supporting 3 million context? Raise your hand.

[00:38:28] Vibhu: Yeah, 2 million.

[00:38:29] Eugene Cheah: Oh, it's 2 million.

[00:38:31] Eugene Cheah: Yeah, how many of you actually tried that? See?

[00:38:34] Vibhu: I use it a lot. You? You work for MindsTV.
I use it a lot.

[00:38:41] Eugene Cheah: So, for the people who have used it: I think this might be where my opinion starts to differ, because I think the big labs may have a bigger role in this. Even for RWKV, even when we train long context, the reason I say VRAM is a problem is that we need to backprop against the states, so we actually need to maintain the states between the tokens across the whole token length.

[00:39:05] Eugene Cheah: So that means we need to actually roll out the whole 1 million context if we are actually training at 1 million. Which is the same for transformers, actually; it just means we don't magically reduce the VRAM consumption at training time. So that is one of the VRAM bottlenecks, and I'm neither OpenAI nor Google, so donate GPUs if you have too many of them.

[00:39:27] Eugene Cheah: But then, putting it into another paradigm, I think o1-style reasoning might actually be pushing that direction downwards. My partial hot take is this: let's say you have a super big model, and you also have a 70B model that may take double the tokens but gets the same result. Strictly speaking, the 70B, and this is true for transformer or non-transformer alike, will take fewer resources than that 400B model, even if it does double the amount of thinking.

[00:39:51] Eugene Cheah: And if that's the case, and we are all still trying to figure this out, maybe the direction for us is really getting the sub-200B models to be as fast and efficient as possible, with a very efficient architecture that some folks happen to be working on, and just reasoning it out over larger and larger context lengths.

[00:40:20] Question: Yeah. One thing I'm super interested in is models that can watch forever. Obviously you cannot train something on infinite context length. How are y'all thinking about that, where you run on a much longer context length than is possible to train on?

[00:40:38] Dan Fu: Yeah, it's a great question. I think you guys probably had tweets along these lines too. When we first started doing these things, because these are all recurrent models, in theory you could just run them forever, and at the very least they won't error out or crash on you. There's another question of whether the model can actually use what it has seen in that infinite context.

[00:40:57] Dan Fu: And I think there, one place where the research on architectures ran faster than another line of research is the benchmarks for long context. So you turn it on forever, you want it to do everything or watch everything: what is it that you actually want it to do? Can we actually build some benchmarks for that, then measure what's happening, and then ask whether the models can do it, and whether there's something else they need?

[00:41:16] Dan Fu: Yeah, I think if I were to turn back the clock to 2022, that's probably one of the things I would have done differently: actually get some long-context benchmarks out at the same time as we started pushing context length on all these models.

[00:41:41] Eugene Cheah: I will also say, on the use case: I think we both agree that there's no infinite memory, and the model needs to be able to learn and decide.
I think what we have observed, and I think this also fits the state space models, is that one of the key advantages of this alternative attention mechanism, which is not based on token position, is that the model doesn't suddenly go crazy when you go past the 8k training context length, or a million context length.

[00:42:03] Eugene Cheah: It's actually still stable. It's still able to run, it's still able to reason; it just starts forgetting things. But some of those things are still there in latent memory; some of those things are still somewhat there. That's the whole point of why reading twice works, things like that.

[00:42:26] Eugene Cheah: And one of the biggest pushes in this direction is that both state space models and RWKV have separate papers by other researchers where they use these architectures for time-series data, for weather modeling. So you are not asking what the weather was five days ago; you are asking what the weather will be tomorrow, based on a history that keeps growing for as long as the Earth and the computer keep running.

[00:42:47] Eugene Cheah: And they found that it is better than existing transformer architectures at modeling this weather data, controlled for parameter size and so on (I'm quite sure there are people with larger models). So there are future applications here, if your question is about what's next and not what was ten years ago.

[00:42:59] Dan Fu: Thanks so much for having us.

Get full access to Latent Space at www.latent.space/subscribe

Dreamvisions 7 Radio Network
11:11 Talk Radio with Simran Singh: The Satisfied Woman

Dreamvisions 7 Radio Network

Play Episode Listen Later Dec 20, 2024 57:26


The Satisfied Woman: Alanna Kaivalya, Ph.D. The 'Satisfied Woman' challenges the conventional wisdom that has long pushed women to model themselves after male-defined notions of success. For centuries, the female struggle for equality has meant fighting for the same opportunities as men to make their own choices, build independent lives, craft powerful careers, and exercise their agency. But has this been the wrong approach? Could striving to be "equal to men" actually be holding women back from realizing their fullest potential? Focusing too narrowly on integrating into masculine systems and hierarchies, women have forged paths laid out by men, emulating what men identify as desirable. These preconceived notions of what society has deemed standard have forced the modern woman to overlook the extraordinary power of developing and expressing her innate femininity. Could this be why many women remain unsatisfied and overwhelmed? Alanna Kaivalya, Ph.D. is an author, educator, speaker, and thought leader in the field of women's empowerment and femininity. She has written 5 books, developed international training programs, and taught audiences around the world. She has spent more than 20 years studying psychology, the human condition, the nature of the feminine and femininity, and eastern spirituality, and earned a doctoral degree in Mythological Studies and Depth Psychology from Pacifica Graduate Institute in 2015. Her work centers on empowering women through a better understanding of femininity in the modern world. Her most recent book, The Way of the Satisfied Woman, helps women redefine femininity on their own terms and gives them clear pathways to become fully satisfied. https://www.thesatisfiedwoman.com/ Learn more about Simran here: www.iamsimran.com www.1111mag.com/

11:11 Talk Radio
Choosing Love and Seeing Magic: Atousa Raissyan

11:11 Talk Radio

Play Episode Listen Later Dec 17, 2024 51:41


Guest: Atousa Raissyan Life is not about things being perfect but rather recognizing that life is perfect just as it is; and everything is a gift. Even with its challenges and discomfort, it is still perfect because everything is with purpose. Our purpose is love... and especially to love ourselves. It requires releasing the fear and moving more deeply into love. Allow yourself to flow with life in whatever manner it comes. Love, joy and peace are within you, and surround you. And... experience gratitude for all of it. My guest this week is Atousa Raissyan. She wants you to become aware of your fear and how it is controlling your life. She wants to remind you that you are bigger than this so-called “reality.” In this week's episode, Atousa will help you see that it is possible to change your life. Atousa Raissyan, founder of Soulystic Healing Sanctuary, is recognized as a shaman, published author, heart-centered transformational healer, spiritual guide and teacher, digital artist, poet, inspirational speaker, life changer, and host of the podcast Goodbye Bullshit, Hello Happiness. Her latest book is: Change Yourself, Change the World. More info at: www.AtousaRaissyan.com 11:11 TALK RADIO with SIMRAN 11:11 Talk Radio dives deeply into conversations of truth, growth and wisdom to assist individuals in expanding personal growth, conscious living, humanity, and self-awareness. SIMRAN is a multiple award-winning author, artist, speaker and mystic of love, compassion and humanity. She creates media, art, books, and online courses that bridge humanity's experience and expression of darkness and light. Find out more at: www.iamsimran.com and www.1111mag.com

Know What You See with Brian Lowery
A Guiding Light: Simran Jeet Singh on Faith and Practice

Know What You See with Brian Lowery

Play Episode Listen Later Dec 10, 2024 28:02


In this episode of Know What You See, Brian Lowery talks with Simran Jeet Singh, professor, author, and host of the podcast Wisdom and Practice, to explore the intersection of faith, purpose, and daily life. Simran shares how his Sikh faith guides his journey toward self-improvement, happiness, and meaningful community connections. Together, he and Brian discuss the role of religion as a practice, not just a belief system, and its potential to transform individuals and societies. For more on the show, visit knowwhatyousee.com.

11:11 Talk Radio
The Satisfied Woman: Alanna Kaivalya, Ph.D.

11:11 Talk Radio

Play Episode Listen Later Dec 10, 2024 60:00


The 'Satisfied Woman' challenges the conventional wisdom that has long pushed women to model themselves after male-defined notions of success. For centuries, the female struggle for equality has meant fighting for the same opportunities as men to make their own choices, build independent lives, craft powerful careers, and exercise their agency. But has this been the wrong approach? Could striving to be "equal to men" actually be holding women back from realizing their fullest potential? Focusing too narrowly on integrating into masculine systems and hierarchies, women have forged paths laid out by men, emulating what men identify as desirable. These preconceived notions of what society has deemed standard have forced the modern woman to overlook the extraordinary power of developing and expressing her innate femininity. Could this be why many women remain unsatisfied and overwhelmed?

Dreamvisions 7 Radio Network
11:11 Talk Radio with Simran Singh: Aikido Teachings

Dreamvisions 7 Radio Network

Play Episode Listen Later Dec 6, 2024 55:32


Aikido Teachings of Robert Nadeau: Bob Noha A widely influential figure in the development of Aikido in America, Robert Nadeau is known as one of the few American direct disciples of Aikido's founder Morihei Ueshiba Osensei. Now an 8th dan Aikido master teacher, Nadeau has taught generations of students, and several have become prominent teachers in their own right. However, he has never written about his life or philosophy, always reserving his most pointed lessons for those who practice with him in person. This conversation explores Nadeau's core concepts, describes his simple-but-effective practices for personal development, and conveys his time-tested approach to the inner training at the heart of Aikido in a very accessible way. Gain insight into some of these powerful teachings and how they have influenced generations of practitioners. Bob Noha, 6th Dan Bob began practicing Aikido in 1966 in Mountain View and shortly thereafter began training with Robert Nadeau Sensei, which started a lifelong friendship. Bob opened the first Aikido school in the Washington, D.C. area in 1970 and taught arrest/restraint tactics to U.S. Military Police at Andrews Air Force Base in 1974. Then, in 1975, he established the first Aikido school in Buffalo, New York. He founded Aikido of Petaluma in 1983 and continues to serve as its chief instructor. Bob traveled to Japan to further deepen his Aikido training in 1998, 1999, and 2006. In addition, he is also a devoted student and teacher of t'ai chi and has a background in several other martial arts. Learn more about Simran here: www.iamsimran.com www.1111mag.com/

11:11 Talk Radio
Signs as Your Personal Growth Path: SIMRAN

11:11 Talk Radio

Play Episode Listen Later Dec 3, 2024 60:00


You are here to reunite mind, body, and spirit by creating, experiencing, and expressing. This will always engage a beginning, middle, and end. It will often encompass a challenge that unfolds a deeper lesson. Your journey to planet Earth was intended for experiential engagement. This would require both the process of involution and that of evolution. There are parts on the inside and outside of every human being that are at war with the Self. It is the journey of victim, villain, and hero . . . the unknown, the question, the answer . . . living, being, and knowing. The greatest gift of the soul is to release fully, so that it can be all that experience offers in knowing its true expansiveness. The journey involves predetermined obstacles within the self and in the outside world. It is through being at war, in love, and at peace that the soul journey unfolds. The experience and integration of love, poured on every circumstance, brings the peace being sought. Love would have you discover that everything is loving and good, even those things that seem ugly, wrong, or bad. Love would show you the steps along the path. Love would never leave you or stop being your guide. But it requires you to be present and loving to the Self, so that you see, and can be seen, through the eyes of Love rather than distorted perceptions. This is the place that holds all illusion, reality, and truth.

Dreamvisions 7 Radio Network
11:11 Talk Radio with Simran Singh: Intuition & Inner Wisdom

Dreamvisions 7 Radio Network

Play Episode Listen Later Nov 29, 2024 55:52


Intuition & Inner Wisdom: Happy Ali The ability to thrive amidst a constant barrage of external input lies in our ability to connect with the innate power of our intuition. Each of us has a great resource within that is wired to our life's purpose, and it is waiting to guide us one step at a time. Hunches, precognition, dream analysis and building to direct communication with a multidimensional level of intelligence is always available. Ready to discover more about intuition and inner wisdom? Happy Ali is the author of The Intuition Bible: How and Why to Tap Into Your Inner Wisdom. With a BA degree in psychology from UCLA, he is a prophetic dreamer, certified master NLP practitioner, certified master clinical hypnotherapist, and host of the Happy Insights podcast. Happy's spiritual journey began in 1995 when he dedicated his life to the exploration of metaphysical disciplines after a near-death experience inspired a dramatic awakening. Now, Happy teaches and shares the Subconscious Manifestation Methodology he created globally. www.HappyInsights.net. Learn more about Simran here: www.iamsimran.com www.1111mag.com/

11:11 Talk Radio
Aikido Teachings of Robert Nadeau: Bob Noha

11:11 Talk Radio

Play Episode Listen Later Nov 26, 2024 60:00


A widely influential figure in the development of Aikido in America, Robert Nadeau is known as one of the few American direct disciples of Aikido's founder Morihei Ueshiba Osensei. Now an 8th dan Aikido master teacher, Nadeau has taught generations of students, and several have become prominent teachers in their own right. However, he has never written about his life or philosophy, always reserving his most pointed lessons for those who practice with him in person. This conversation explores Nadeau's core concepts, describes his simple-but-effective practices for personal development, and conveys his time-tested approach to the inner training at the heart of Aikido in a very accessible way. Gain insight into some of these powerful teachings and how they have influenced generations of practitioners.

Dreamvisions 7 Radio Network
11:11 Talk Radio with Simran Singh: Relationships & Astrology

Dreamvisions 7 Radio Network

Play Episode Listen Later Nov 22, 2024 54:36


Relationships & Astrology: Kate Rose Wouldn't you like a cheat sheet telling you whether a relationship is meant to last? Bestselling author Kate Rose gives you the insight you need to differentiate between soulmate, twin flame, and karmic relationships. Using an astrological birth chart — a cosmic fingerprint — you can see not just personality traits but also the wounds and lessons, specifically in love, you will encounter and learn from in this lifetime. Discover the true nature of soulmate relationships (contrary to popular belief, they do not yield the highest form of love) as well as the essential lessons to be learned in karmic relationships and the incomparable fulfillment of merging with your twin flame. Kate Rose is the author of Written in the Stars: The Astrology of Soulmate, Karmic, and Twin Flame Relationships and You Only Fall in Love Three Times. As a therapist, relationship expert, spiritual intuitive and astrologer, Kate uses the stars to help her clients clarify their purpose and recognize the blocks that are keeping them from living the life they are destined to live. She holds an MS in clinical art therapy from Springfield College and writes regular columns for YourTango, Elephant Journal, and her newsletter, Unedited. www.WordsOfKateRose.com Learn more about Simran here: www.iamsimran.com www.1111mag.com/

11:11 Talk Radio
Intuition & Inner Wisdom: Happy Ali

11:11 Talk Radio

Play Episode Listen Later Nov 19, 2024 60:00


The ability to thrive amidst a constant barrage of external input lies in our ability to connect with the innate power of our intuition. Each of us has a great resource within that is wired to our life's purpose, and it is waiting to guide us one step at a time. Hunches, precognition, dream analysis and building to direct communication with a multidimensional level of intelligence is always available. Ready to discover more about intuition and inner wisdom?

Dreamvisions 7 Radio Network
11:11 Talk Radio with Simran Singh: Dr. Elson Haas

Dreamvisions 7 Radio Network

Play Episode Listen Later Nov 15, 2024 56:47


New Medicine: Dr. Elson Haas The health of our young people is particularly important because bad habits learned early can have disastrous short-term and long-term consequences, as we see with the alarming rates of childhood obesity and early onset diabetes. Therefore, a key passion is to teach young people to care for the one and only body they each possess, with love and respect. Teaching children about how their bodies work and how to take care of themselves by making nourishing choices and learning the difference between real food and treats is critically important work for each of us as individuals and as a society. An integrative family physician for 40 years and the author of 11 health books, Dr. Haas approaches diagnosis and treatment with the maxim "Lifestyle first, Natural therapies next, drugs or surgery last." His unique integration of Natural, Eastern and Western medical approaches, which he calls NEW Medicine, is built on restoring proper body physiology and supporting healthy cells and tissues in order to promote healing. It recognizes proper nutrition as the cornerstone of health and also includes elimination diets and detoxification practices, often with a seasonal focus. Dr. Haas has practiced this philosophy for more than 40 years in his work as an integrative family physician, incorporating these practices within an insurance model at his clinic, the Preventive Medical Center of Marin in San Rafael, CA, where he and an extensive team of practitioners and staff have provided care to their patients since 1984. The origin of the word doctor is the Latin "docere," meaning "to teach," and Elson believes that health education is crucial in helping people become proactive in their journey towards optimal health. He is the author of 11 books on various aspects of integrative healing, including the classic Staying Healthy with the Seasons, as well as Staying Healthy with Nutrition, The Detox Diet, The False Fat Diet, Ultimate Immunity and his latest, Staying Healthy with NEW Medicine. Learn more about Simran here: www.iamsimran.com www.1111mag.com/

Khandaan- A Bollywood Podcast
Ep 253- Citadel Honey Bunny And Do Patti Reviews

Khandaan- A Bollywood Podcast

Play Episode Listen Later Nov 13, 2024 67:03


Welcome to Khandaan: A Bollywood Podcast where this week we're discussing CITADEL: HONEY BUNNY on Amazon Prime. Directed by Raj and DK, Honey Bunny stars Samantha Ruth Prabhu and Varun Dhawan along with Kay Kay, Simran and Saqib Saleem. We talk about what this show does better than the original and what really stood out to us. We also discuss DO PATTI on Netflix. Produced by & starring Kriti Sanon and co-produced & written by Kanika Dhillon, this thriller is rather less than thrilling. Co-starring a self-conscious Kajol, this movie is mostly a vehicle for Kriti and nothing more. Shownotes: This week's Bonus Episode is available for Patreon Only: Patreon Bonus- US Election trauma, Biswa Kalyan Rath, Sweet Bobby, Dil To Pagal Hai Check out our Video Essay on our YouTube Channel: "Kareena Kapoor Khan: The Last of the Bollywood Divas" Do leave a comment and subscribe, we would love to do more video essays! Show notes: Mamata Kulkarni: A Siren in a Scandal Log Mujhe Kehte Hai | Parvati Khan | Pratikar Follow us on Socials: Amrita, Sujoy, Asim YouTube, Facebook, Instagram, Tik Tok Sujoy's Instagram Amrita's YouTube Book Channel- Amrita By The Book You can listen to Khandaan- A Bollywood Podcast episodes on the following apps: Apple Podcast Spotify Jio Saavn Deezer Audible Amazon Music Omny iHeart TuneIn

11:11 Talk Radio
Relationships & Astrology: Kate Rose

11:11 Talk Radio

Play Episode Listen Later Nov 12, 2024 60:00


Wouldn't you like a cheat sheet telling you whether a relationship is meant to last? Bestselling author Kate Rose gives you the insight you need to differentiate between soulmate, twin flame, and karmic relationships. Using an astrological birth chart — a cosmic fingerprint — you can see not just personality traits but also the wounds and lessons, specifically in love, you will encounter and learn from in this lifetime. Discover the true nature of soulmate relationships (contrary to popular belief, they do not yield the highest form of love) as well as the essential lessons to be learned in karmic relationships and the incomparable fulfillment of merging with your twin flame.

Dreamvisions 7 Radio Network
11:11 Talk Radio with Simran Singh: Your New Reality

Dreamvisions 7 Radio Network

Play Episode Listen Later Nov 8, 2024 56:56


Your New Reality: Maureen St. Germain There is a new, amazing game ahead of us. This transformation is a mental and physical exercise that can only be driven by spiritual willingness to shift and change. Your wake-up call helped you recognize your situation. Get ready to race to . . . not the finish line, but the starting block! Live Your Best 5D Life! A lifelong interest in the Akashic Records resulted in her being granted access to this dimension that has been off limits to most of humanity for millions of years. Founder of ARI - Akashic Records International, Maureen is an extremely accurate Akashic Records Guide and instructor. Maureen is the founder of the St. Germain Mystery School, the Ascension Institute, and Founders Circle. An internationally recognized teacher and intuitive, she is also the creator of the app Illuminate which is rich with guided meditations, tunings, chants and activations. Living Your Best 5D Life is Maureen St Germain's 8th book, and the 3rd in the 5D series following the international best-sellers, Waking Up in 5D and Mastering Your 5D Self. She currently lives near Sedona, Arizona, and offers workshops worldwide. Website: https://practicalmystic.komi.io/ Learn more about Simran here: www.iamsimran.com www.1111mag.com/

11:11 Talk Radio
New Medicine: Dr. Elson Haas

11:11 Talk Radio

Play Episode Listen Later Nov 5, 2024 60:00


The health of our young people is particularly important because bad habits learned early can have disastrous short-term and long-term consequences, as we see with the alarming rates of childhood obesity and early onset diabetes. Therefore, a key passion is to teach young people to care for the one and only body they each possess, with love and respect. Teaching children about how their bodies work and how to take care of themselves by making nourishing choices and learning the difference between real food and treats is critically important work for each of us as individuals and as a society.

Dreamvisions 7 Radio Network
11:11 Talk Radio with Simran Singh: Achieve Enlightenment

Dreamvisions 7 Radio Network

Play Episode Listen Later Nov 1, 2024 56:02


Achieve Enlightenment: Master Del Pe Meditation, prayers, yoga, vegetarianism and spiritual practices alone can be a waste of your time, resources and effort. They are only pieces of the puzzle which are not assembled to complete the enlightenment formula. You need a curriculum of self-mastery to achieve enlightenment in this lifetime. Don't leave this life merely rich, famous or successful. Travel and cross crystallizing borders of religiosity and aging spirituality... override the humps of contemporary ignorance ... escape the ditch of modern preoccupations... transport your consciousness far away, beyond the geography of obsolescence ... and expect to arrive at the doorstep of enlightenment. Depart enlightened. A Modern Sage and Enlightened Master, who brings the alchemy of eastern wisdom and western knowledge, MASTER DEL PE is a guru who wears many hats. Having healed and taught over 400,000 students and clients during his travels to more than 100 countries around the world, Master Del Pe is also a Miraculous Healer, Esoteric Scientist, Esoteric Psychologist, Divine Alchemist, a world-expert of the 8 types of yoga and the 12 styles of meditation as well as a true pioneer in Spiritual Technology. He founded the BElife Institute for Higher Consciousness (BIHC), the Wisdom Institute for Leadership and Global Advancement (WILGA) and World Institute for Incurable Diseases (WIID) with his 4 decades (and many lifetimes) of experience on the Enlightened Life Path. An author of 12 books, and a Mentor-Healer to several billionaires, Fortune 500 companies, world leaders and advanced soul servers, Master Del Pe welcomes students to his 200+ online courses and Enlightened Life Retreats online and in-person at his MDP Village Retreat Resort in the Philippines. His vision is to train humanity to master life ahead of its time. www.masterdelpe.com Learn more about Simran here: www.iamsimran.com www.1111mag.com/

11:11 Talk Radio
Your New Reality: Maureen St. Germain

11:11 Talk Radio

Play Episode Listen Later Oct 29, 2024 60:00


There is a new, amazing game ahead of us. This transformation is a mental and physical exercise that can only be driven by spiritual willingness to shift and change. Your wake-up call helped you recognize your situation. Get ready to race to . . . not the finish line, but the starting block! Live Your Best 5D Life!

11:11 Talk Radio
Achieve Enlightenment: Master Del Pe

11:11 Talk Radio

Play Episode Listen Later Oct 22, 2024 60:00


Meditation, prayers, yoga, vegetarianism and spiritual practices alone can be a waste of your time, resources and effort. They are only pieces of the puzzle which are not assembled to complete the enlightenment formula. You need a curriculum of self-mastery to achieve enlightenment in this lifetime. Don't leave this life merely rich, famous or successful. Travel and cross crystallizing borders of religiosity and aging spirituality... override the humps of contemporary ignorance ... escape the ditch of modern preoccupations... transport your consciousness far away, beyond the geography of obsolescence ... and expect to arrive at the doorstep of enlightenment. Depart enlightened.

Sofia Unfiltered
The BRCA1 Mutation and How it Affects Your Breast Cancer Risk with Dr. Simran Malhotra EP 48

Sofia Unfiltered

Play Episode Listen Later Oct 21, 2024 50:40


Dr. Simran Malhotra shares her journey as a BRCA1 previvor and triple board-certified physician, offering insights into proactive cancer prevention and genetic testing. Tune in for practical tips on balancing hormone replacement therapy and making lasting lifestyle changes to enhance well-being and longevity.

In this episode we chat about:
A Physician's Journey with BRCA1
Preparing for Genetic Testing: What You Need to Know
The Power of Choice in Cancer Prevention
Balancing HRT and Lifestyle Medicine
Realistic Goals for Lasting Lifestyle Change

Episode Resources:
Dr. Malhotra's Instagram (Instagram)
Dr. Malhotra's website (website)
Book Lifestyle Medicine Physicians at Sofia Health (book session)
Book Tai Chi, Qigong, meditation, mindfulness, and Yoga classes with Prime (get free trial)

Thank you so much for tuning in! If you enjoyed the content, we would love it if you took 2 minutes to leave a 5-star review! The Sofia Unfiltered Podcast by Sofia Health is for general informational and entertainment purposes only and does not constitute the practice of medicine, nursing or other professional health care services, including the giving of medical advice. No doctor/patient relationship is formed. The use of information on this podcast or materials linked from this podcast is at the user's own risk. The content of this podcast is not intended to be a substitute for professional medical advice, diagnosis, or treatment. Users should not disregard or delay in obtaining medical advice for any medical condition they may have. For any health concerns, users should seek the assistance of their health care professionals.

Dreamvisions 7 Radio Network
11:11 Talk Radio with Simran Singh: The Medicine Wheel

Dreamvisions 7 Radio Network

Play Episode Listen Later Oct 18, 2024 56:33


Eight Directions of The Medicine Wheel
Guest: Carlos Philip Glover

Our personal transformation serves the collective healing. This is a journey of self-empowerment around the Medicine Wheel of Earth Wisdom - to ignite your spirit fire, awaken new levels of consciousness, and inspire you to meet the outer challenges through deep inner work. It opens aspects of your innate Earth Wisdom, including your creative, sensory, emotional, and intuitive intelligences, and the wisdom of your body-mind and heart-knowing. Illustrated with heartful stories and inspiring images, this is a great read, full of engaging humour, lyrical storytelling, and practices for the next step of human evolution.

Carlos is a carrier of ancient Earth Wisdom and the author of Earth Wisdom Teachings. He is a teacher of these old ways, working in England, Spain, and Denmark. He is also a Drum Dance Chief and Vision Quest guide. After travelling widely, he explored various spiritual traditions and has been working in the field of consciousness expansion ever since. www.carlosphilipglover.com

Learn more about Simran here: www.iamsimran.com www.1111mag.com/

11:11 Talk Radio
The Power of Words: Dr. Christina Donnell

11:11 Talk Radio

Play Episode Listen Later Oct 15, 2024 60:00


A series of mystifying moments, occasioned by words that lit up before her eyes, left longtime psychologist and teacher Christina Donnell profoundly engaged. As her sensory impressions of this “living language” deepened, she noticed her thoughts giving way to direct perception and her inner sensations yielding to a feeling of universal perception, transporting her to a level of awareness far beyond the ordinary. No longer was there a self separate from other; infinity and eternity had become physical realities, and more.

11:11 Talk Radio
Eight Directions of The Medicine Wheel: Carlos Philip Glover

11:11 Talk Radio

Play Episode Listen Later Oct 8, 2024 60:00


Our personal transformation serves the collective healing. This is a journey of self-empowerment around the Medicine Wheel of Earth Wisdom - to ignite your spirit fire, awaken new levels of consciousness, and inspire you to meet the outer challenges through deep inner work. It opens aspects of your innate Earth Wisdom, including your creative, sensory, emotional, and intuitive intelligences, and the wisdom of your body-mind and heart-knowing. Illustrated with heartful stories and inspiring images, this is a great read, full of engaging humour, lyrical storytelling, and practices for the next step of human evolution.

11:11 Talk Radio
Savoring Aging: Kamla Kapur

11:11 Talk Radio

Play Episode Listen Later Oct 1, 2024 60:00


This is a clarion call to the aging to awaken before they die, to embark on the adventure of self-discovery, and to become warriors on the spiritual path, embracing and ensuring safe passage through the ultimate triumph of conscious living and dying. The privilege of aging is to experience this precious, painful life with engagement and to find vitality, satisfaction, and joy in the life we are fortunate to still have. Learn the art of resting, of happiness, of letting go, and of facing death so as to live in its light with greater intensity, enthusiasm, and passion, anchored in a spirituality that transcends death.

11:11 Talk Radio
Becoming Fierce: Stephanie James

11:11 Talk Radio

Play Episode Listen Later Sep 24, 2024 60:00


Embody the powerful, passionate, fiery energy that is the authentic expression of you. Love beyond external circumstances, live fully in the present moment, and fully embody the spark that is you, letting it shine the way for generations to come. Life is inviting us to stop focusing on everything that is going on outside of us, to stop believing the fear narrative, and to let go of all the things that distract us from cultivating the resilience, joy, and grit it takes to live a bold, beautiful, and deeply meaningful life. We can let go of the focus on the outer world dictating our sense of well-being. The point of power is in the present moment. You get to choose what you want to focus on. This is an inside job and the time to begin is now. It is time to become fierce.

Go(o)d Mornings with CurlyNikki
Affirm: God is Aware of My Situation #GMWeekends

Go(o)d Mornings with CurlyNikki

Play Episode Listen Later Jul 21, 2024 11:48


R E L A X God is circling around you, that temple, that tower. He's aware of your situation, of every thought, every fear. You had gone astray, lost in what you're not, and who you're not, but you're back. How does it feel to know who you are? To know whose you are? Shift into this felt-Knowing, this Silence. Stop leaving, and expect miracles. I Love you, Nik

Support the show: ▶▶ https://www.patreon.com/goodmornings

Today's Quotes:

My favorite icon of Jesus (on my nightstand, the pic I went to lay the communion in front of!): https://en.wikipedia.org/wiki/Christ_Pantocrator_%28Sinai%2

My favorite Maharajji picture (on my stand): https://maharajji.love/files/images/Maharajji_solo_color/#/view/ID216434

"Hidden power" - Baba Ram Singh ji

"A hand unseen is behind this creation, and every pair of eyes is looking towards that hand which is not seen. Wherever one looks, one is looking for that hand which is behind and unseen. You look at a flower and appreciate the flower, and behind the flower there's that beauty, that truth which could bring this beauty. If this flower does not remind you of the eternal beauty, of the eternal truth, then it has not fulfilled its purpose. So behind every activity, pleasant, unpleasant, chaotic, harmonious, is one Divinity." - Sri Sri Ravi Shankar

"All I know and all I can say is, God is preparing me for something. I am necessary." - via IG @Ceexnotes

"One of the proofs that our Simran is being done correctly is that, in any kind of difficulty, we first remember Simran." - Baba Ram Singh

"I woke up and decided I will not stress over things that are out of my control. People will be people. Jobs will be jobs. But one thing I know for sure is... God will be God and... He's got me." - via IG @amazinggracecoffe

"The only comfort we can find is by saying: 'Allah is aware of my situation.'" - via IG @noorsayingss

"God will speak clearly, guide deliberately, and liberate completely as you know and experience his heart more fully." - via IG @NeilVermillion

"I am circling around God, around the ancient tower, and I have been circling for a thousand years, and I still don't know if I am a falcon, or a storm, or a great song." - Rilke

"Trusting the reality of reality itself." - Neil Douglas Klotz

"Anything CAN happen BECAUSE He is the God of miracles." - via IG @letsgrowsisters