Podcasts about Jina

• 255 podcasts
• 524 episodes
• 41m average episode duration
• 1 episode every other week
• Latest episode: May 21, 2025

POPULARITY (trend chart, 2017-2024)



Latest podcast episodes about Jina

Religiously Literate
15. What is Jainism?

May 21, 2025 · 36:26


SHOW NOTES:
Definitions:
• Jiva (soul): the essence of every living thing, eternal and conscious
• Kevala: omniscience, absolute knowledge of reality; the enlightenment that terminates samsara
• Digambara: one of the two main Jain sects, characterized by the monastic practice of nudity
• Shvetambara: the larger of the two main Jain sects, characterized by the monastic practice of wearing white
• Jina: "conqueror," one who has overcome samsara
• Tirthankara: a "bridge builder" or "maker of the ford"; one who has crossed over from the captivity of samsara to the world of enlightenment and freedom from rebirth
The Five Great Vows:
1. Do not harm any living thing.
2. Speak the truth.
3. Do not steal.
4. Be chaste.
5. Renounce all possessions.
Sources:
Vaughn, Lewis. Anthology of World Religions: Sacred Texts and Contemporary Perspectives. Oxford University Press, 2017.
Partridge, Christopher, and Tim Dowley. A Brief Introduction to Jainism and Sikhism. 1st ed. Vol. 5 of Brief Introductions to World Religions. Minneapolis: Fortress Press, 2019.
Gough, Ellen. "Jainism: An Introduction - By Jeffery D. Long." Religious Studies Review 36, no. 1 (2010): 97.

New Books in Critical Theory
Jina B. Kim, "Care at the End of the World: Dreaming of Infrastructure in Crip-Of-Color Writing" (Duke UP, 2025)

Apr 18, 2025 · 53:27


In Care at the End of the World: Dreaming of Infrastructure in Crip-Of-Color Writing (Duke UP, 2025), Jina B. Kim develops what she calls crip-of-color critique, bringing a disability lens to bear on feminist- and queer-of-color literature in the aftermath of 1996 US welfare reform and the subsequent evisceration of social safety nets. She examines literature by contemporary feminist, queer, and disabled writers of color such as Jesmyn Ward, Octavia Butler, Karen Tei Yamashita, Samuel Delany, and Aurora Levins Morales, who each bring disability and dependency to the forefront of their literary freedom dreaming. Kim shows that in their writing, liberation does not take the shape of the unfettered individual or hinge on achieving independence. Instead, liberation emerges by recuperating dependency, cultivating radical interdependency, and recognizing the numerous support systems upon which survival depends. At the same time, Kim demonstrates how theories and narratives of disability can intervene into state-authored myths of resource parasitism, such as the welfare queen. In so doing, she highlights the alternate structures of care these writers envision and their dreams of life organized around reciprocity and mutual support. The book won the Duke University Press Scholars of Color First Book Award. Jina B. Kim is Assistant Professor of English and the Study of Women, Gender, and Sexuality at Smith College. Kim is a scholar, writer, and educator of feminist disability studies, queer-of-color critique, and contemporary multi-ethnic U.S. literature. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/critical-theory

New Books Network
Jina B. Kim, "Care at the End of the World: Dreaming of Infrastructure in Crip-Of-Color Writing" (Duke UP, 2025)

Apr 12, 2025 · 53:27


In Care at the End of the World: Dreaming of Infrastructure in Crip-Of-Color Writing (Duke UP, 2025), Jina B. Kim develops what she calls crip-of-color critique, bringing a disability lens to bear on feminist- and queer-of-color literature in the aftermath of 1996 US welfare reform and the subsequent evisceration of social safety nets. She examines literature by contemporary feminist, queer, and disabled writers of color such as Jesmyn Ward, Octavia Butler, Karen Tei Yamashita, Samuel Delany, and Aurora Levins Morales, who each bring disability and dependency to the forefront of their literary freedom dreaming. Kim shows that in their writing, liberation does not take the shape of the unfettered individual or hinge on achieving independence. Instead, liberation emerges by recuperating dependency, cultivating radical interdependency, and recognizing the numerous support systems upon which survival depends. At the same time, Kim demonstrates how theories and narratives of disability can intervene into state-authored myths of resource parasitism, such as the welfare queen. In so doing, she highlights the alternate structures of care these writers envision and their dreams of life organized around reciprocity and mutual support. The book won the Duke University Press Scholars of Color First Book Award. Jina B. Kim is Assistant Professor of English and the Study of Women, Gender, and Sexuality at Smith College. Kim is a scholar, writer, and educator of feminist disability studies, queer-of-color critique, and contemporary multi-ethnic U.S. literature. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/new-books-network

New Books in Literary Studies
Jina B. Kim, "Care at the End of the World: Dreaming of Infrastructure in Crip-Of-Color Writing" (Duke UP, 2025)

Apr 12, 2025 · 53:27


In Care at the End of the World: Dreaming of Infrastructure in Crip-Of-Color Writing (Duke UP, 2025), Jina B. Kim develops what she calls crip-of-color critique, bringing a disability lens to bear on feminist- and queer-of-color literature in the aftermath of 1996 US welfare reform and the subsequent evisceration of social safety nets. She examines literature by contemporary feminist, queer, and disabled writers of color such as Jesmyn Ward, Octavia Butler, Karen Tei Yamashita, Samuel Delany, and Aurora Levins Morales, who each bring disability and dependency to the forefront of their literary freedom dreaming. Kim shows that in their writing, liberation does not take the shape of the unfettered individual or hinge on achieving independence. Instead, liberation emerges by recuperating dependency, cultivating radical interdependency, and recognizing the numerous support systems upon which survival depends. At the same time, Kim demonstrates how theories and narratives of disability can intervene into state-authored myths of resource parasitism, such as the welfare queen. In so doing, she highlights the alternate structures of care these writers envision and their dreams of life organized around reciprocity and mutual support. The book won the Duke University Press Scholars of Color First Book Award. Jina B. Kim is Assistant Professor of English and the Study of Women, Gender, and Sexuality at Smith College. Kim is a scholar, writer, and educator of feminist disability studies, queer-of-color critique, and contemporary multi-ethnic U.S. literature. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/literary-studies

New Books in LGBTQ+ Studies
Jina B. Kim, "Care at the End of the World: Dreaming of Infrastructure in Crip-Of-Color Writing" (Duke UP, 2025)

Apr 12, 2025 · 53:27


In Care at the End of the World: Dreaming of Infrastructure in Crip-Of-Color Writing (Duke UP, 2025), Jina B. Kim develops what she calls crip-of-color critique, bringing a disability lens to bear on feminist- and queer-of-color literature in the aftermath of 1996 US welfare reform and the subsequent evisceration of social safety nets. She examines literature by contemporary feminist, queer, and disabled writers of color such as Jesmyn Ward, Octavia Butler, Karen Tei Yamashita, Samuel Delany, and Aurora Levins Morales, who each bring disability and dependency to the forefront of their literary freedom dreaming. Kim shows that in their writing, liberation does not take the shape of the unfettered individual or hinge on achieving independence. Instead, liberation emerges by recuperating dependency, cultivating radical interdependency, and recognizing the numerous support systems upon which survival depends. At the same time, Kim demonstrates how theories and narratives of disability can intervene into state-authored myths of resource parasitism, such as the welfare queen. In so doing, she highlights the alternate structures of care these writers envision and their dreams of life organized around reciprocity and mutual support. The book won the Duke University Press Scholars of Color First Book Award. Jina B. Kim is Assistant Professor of English and the Study of Women, Gender, and Sexuality at Smith College. Kim is a scholar, writer, and educator of feminist disability studies, queer-of-color critique, and contemporary multi-ethnic U.S. literature. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/lgbtq-studies

New Books in Disability Studies
Jina B. Kim, "Care at the End of the World: Dreaming of Infrastructure in Crip-Of-Color Writing" (Duke UP, 2025)

Apr 12, 2025 · 53:27


In Care at the End of the World: Dreaming of Infrastructure in Crip-Of-Color Writing (Duke UP, 2025), Jina B. Kim develops what she calls crip-of-color critique, bringing a disability lens to bear on feminist- and queer-of-color literature in the aftermath of 1996 US welfare reform and the subsequent evisceration of social safety nets. She examines literature by contemporary feminist, queer, and disabled writers of color such as Jesmyn Ward, Octavia Butler, Karen Tei Yamashita, Samuel Delany, and Aurora Levins Morales, who each bring disability and dependency to the forefront of their literary freedom dreaming. Kim shows that in their writing, liberation does not take the shape of the unfettered individual or hinge on achieving independence. Instead, liberation emerges by recuperating dependency, cultivating radical interdependency, and recognizing the numerous support systems upon which survival depends. At the same time, Kim demonstrates how theories and narratives of disability can intervene into state-authored myths of resource parasitism, such as the welfare queen. In so doing, she highlights the alternate structures of care these writers envision and their dreams of life organized around reciprocity and mutual support. The book won the Duke University Press Scholars of Color First Book Award. Jina B. Kim is Assistant Professor of English and the Study of Women, Gender, and Sexuality at Smith College. Kim is a scholar, writer, and educator of feminist disability studies, queer-of-color critique, and contemporary multi-ethnic U.S. literature. Learn more about your ad choices. Visit megaphone.fm/adchoices

Fluent Fiction - Korean
Sisterly Bonds Renewed in the Heart of Seoul's Spring

Mar 30, 2025 · 15:29


Fluent Fiction - Korean: Sisterly Bonds Renewed in the Heart of Seoul's Spring
Find the full episode transcript, vocabulary words, and more: fluentfiction.com/ko/episode/2025-03-30-22-34-01-ko
Story (English translation of the Korean transcript):
On a bright spring day in Seoul, Sumin and Jina were two sisters who rarely spent time together. On this day, the sky was clear and a warm breeze was blowing, so they decided to visit Namsan Seoul Tower. The cherry blossoms at Namsan were in full bloom, making Jina's heart flutter. However, Sumin's expression showed she was still occupied with work. Upon arriving at the base of Namsan Tower, Jina looked up at the sky and smiled. "Thanks for coming, big sister." But at that moment, Sumin's phone rang. She hesitated for a moment before answering the call. Jina sighed and gazed at the cherry blossoms. "Is work really busy?" Jina asked cautiously. "Yes, just wait a bit. I'll be done soon," replied Sumin, and Jina nodded. They rode the cable car up in silence, looking out over Seoul. As they ascended, a stunning view of the city unfolded before them. When they reached the observation deck of the tower, Sumin took a deep breath. "I'll turn off my phone," she said as she switched it off. "Today, I'm completely here with you." Jina was a bit surprised and smiled. "Really? Then can I talk now?" "Of course," Sumin nodded earnestly. From the observation deck with its excellent view, the two sisters sat quietly looking over the city. Jina composed herself and said to Sumin, "Sister, the truth is, I missed you a lot. I felt lonely because we rarely see each other." Sumin remained silent for a moment. "I'm sorry, Jina. Even though I knew, work was always my priority. From now on, I'll treasure our time together more." And so, the two sisters smiled at each other. It was slightly awkward, but suddenly a sense of peace found its way into their hearts. Sumin held Jina's hand and said, "Let's see each other often from now on. And tell me what's on your mind anytime." "Yes, I will," Jina replied, smiling brightly. The two sisters vowed to make a new beginning, like the view of Seoul from Namsan Tower. Along with the warm sunlight, their relationship grew warmer. After that day, Sumin cherished time with her family more, and Jina was able to express her feelings honestly. The day at Namsan, with the spring breeze blowing, remained deeply engraved in their hearts.
Vocabulary words:
bright: 밝은 · rarely: 종종 · breeze: 바람 · blossom: 벚꽃 · flutter: 두근거리다 · occupied: 쫓기다 · hesitate: 고민하다 · cautiously: 조심스럽게 · ascend: 올라가다 · unfold: 펼쳐지다 · observation deck: 전망대 · earnestly: 진지하게 · compose: 다잡다 · treasure: 소중하게 여기다 · slightly: 약간 · peace: 평화 · silence: 침묵 · awkward: 어색하다 · vow: 다짐하다 · engraved: 깊이 남다 · sense: 느낌 · entirely: 완전히 · switch off: 끄다 · honestly: 솔직하게 · priority: 우선 · solitude: 외로움 · reconciliation: 화해 · cherished: 소중하게 여김 · gazed: 바라보다

Radio Maria Tanzania
Understanding the Contexts in Which the Name of God Is Used

Feb 20, 2025 · 28:31


Welcome to the Questions About Faith program with me, Frater Ayuto Kongwa Kulolwa from St. Augustine Seminary, Peramiho, in the Catholic Diocese of Songea, answering a listener's question: "Do not take the name of God in vain," so do those who swear oaths use it properly?

Press Pause with Jouhayna
Empowerment, Independence, & Overcoming Overthinking - Burj Banter With Samara Iqbal

Jan 3, 2025 · 23:10


Empowerment, Independence, & Overcoming Overthinking - Burj Banter with Samara Iqbal #independence #empowerment #overthinking Hi everyone! Welcome to Press Pause with Jouhayna. In this episode of Burj Banter on Press Pause with Jina, we welcome Samara Iqbal as she shares insightful life advice on independence and empowerment. Please visit my website for more information: https://www.jouhaynaalmheiri.com/press-pause-with-jouhayna Samara opens up about the advice she received from her parents about standing on her own two feet, especially as a woman, and why being self-reliant is essential. We also discuss the struggles of overthinking, managing life's challenges, and how spirituality and prayer help Samara cope with stress. Samara, the eldest sibling, talks about the responsibilities she shouldered growing up and how she worked hard to protect her younger sisters from the difficulties she faced. Throughout the episode, Samara highlights the importance of being selective about who you trust, as well as how empowerment and independence are key to personal growth.

Podcast UMN Radio
Coincidence - URD #5

Dec 19, 2024 · 14:46


It all began with a coincidence, until fate brought Jina and Evan together again at the office as coworkers.

Alfajiri - Voice of America
A new militia group named Lakurawas emerges in northwestern Nigeria - November 22, 2024

Nov 22, 2024 · 29:59


A half-hour broadcast of early morning news together with sports news.

Scaling New Heights Podcast: Cutting Edge Training For Small Business Advisors
Episode 97 - Interview with Jina Etienne - The Woodard Report Podcast

Oct 16, 2024 · 42:26


On this show, Joe Woodard speaks with Jina Etienne about reimagining and redefining Diversity, Equity, and Inclusion (DEI), as well as exploring its broader implications beyond common perceptions. Jina is CEO of Etienne Consulting, LLC. She is a consultant, coach, facilitator, and speaker on inclusivity and belonging. Jina's work can be found at Etienne Consulting, where her mission is to shatter unseen barriers.
Additional work from Jina:
• The 2024 season of D&I One-on-One
• A CPA Trendlines podcast on decoupling DEIB, "A Holistic Approach to Diversity"
• A podcast on ADHD called "The Distracted Elephant"
Thank you to our show sponsor! Rightworks: all your accounting apps, unified in the cloud. Learn more about the show and our sponsors at Woodard.com/podcast

Habari za UN
Thank you WFP, now I know how to read and write my name – a DRC refugee

Oct 16, 2024 · 3:34


Beyond its large-scale assistance to people displaced by conflict in the eastern DRC, the UN World Food Programme (WFP) provides support through projects within a framework for preventing and reducing the risks of gender-based violence linked to food security. Around 40 displaced women living in the Bulengo camp in North Kivu province, eastern DRC, have benefited from literacy training since the beginning of 2024. The beneficiaries who learned to read and write testify that it has changed their lives. Our reporter in the eastern DRC, George Musubao, traveled from Beni to Goma, to the Bulengo camp, to speak with one of them.

From Pencils to Pixels: The Animation Celebration Podcast
From Pencils to Pixels #34 – The Music of “Exploding Kittens” with Jina An and Shirley Song

Oct 14, 2024 · 39:12


Scott and Michael welcome their guests, composers Jina An and Shirley Song, to discuss scoring the music for the hit Netflix animated series "Exploding Kittens." Join them for a fun discussion about comedy, animation, music genres, and the creative process behind them all! Find more From Pencils to Pixels: The Animation Celebration Podcast at: www.rf4rm.com Follow the show on X/Twitter: @pencil2pixel Follow the hosts on social media: Scott on X/Twitter: @scotthopkins76 | Michael on X/Twitter: @mlyonsfl | Michael's website: www.wordsfromlyons.com Rate, review, & subscribe to From Pencils to Pixels on Apple Podcasts | Google Play | Stitcher

The Digital Executive
Scaling Success in FinTech: Product Innovation, Client Retention, and Embracing Emerging Tech with Jina Choi | Ep 954

Oct 2, 2024 · 10:16


In this episode of The Digital Executive, Brian Thomas interviews Jina Choi, Chief Product Officer at Kwillt and a seasoned fintech leader with over 20 years of experience. Jina discusses her approach to product development in both B2C and B2B environments, emphasizing the importance of understanding the unique needs of each audience. She shares her strategies for launching cutting-edge platforms on tight deadlines while ensuring product quality, innovation, and scalability. Jina also highlights the critical role of client retention, the value of human connection in a tech-driven world, and her approach to integrating emerging technologies like AI and blockchain into product development. With insights into managing rapid growth and future-proofing digital platforms, Jina offers valuable advice for navigating the evolving fintech landscape.

Radio Islam
Practical Productivity | Dr Zaheera Jina Asvat

Sep 30, 2024 · 15:23


Practical Productivity | Dr Zaheera Jina Asvat by Radio Islam

Jyoti Dham
Jina Pala Gura Da Phareya.

Sep 8, 2024 · 5:58


Guru Viking Podcast
Ep270: Wandering Monk - Jina Kusala

Sep 6, 2024 · 63:41


In this episode, filmed on location in Kathmandu, Nepal, I am joined by Jina Kusala, originally from Norway and now a wandering Theravada monk. Jina tells the story of his traumatic childhood, the mental suffering and instability it caused, and how the Buddhist path provided a means to reckon with the pain. Jina shares encounters with religious leaders such as Mingyur Rinpoche, recounts meeting previous podcast guest Bhante Jason, and reflects on the profound time he spent under Bhante's instruction. Jina recalls his Himalayan wanderings and cave retreats, discusses the power of the 4 foundations of mindfulness, details the differences between Hīnayāna and Vajrayāna realisations, and explains why, with incremental progress, one may aim for sainthood.
…
Video version: https://www.guruviking.com/podcast/ep270-wandering-monk-jina-kusala
Also available on YouTube, iTunes, & Spotify – search 'Guru Viking Podcast'.
…
Topics include:
00:00 - Intro
00:56 - Background and recent travels in the Himalayas
01:40 - How to work with hardship
03:09 - Upbringing in Norway and traumatic childhood
04:30 - Healing trauma
06:21 - Redeeming trauma and helping others
07:42 - Chasing satisfaction
10:22 - Finishing high school and seeking
13:25 - Discerning the wholesome vs unwholesome
14:46 - First encounter with Buddhism in Norway
15:44 - Travel to Sri Lanka and attempts at meditation
17:17 - Jina suggests a location change
17:57 - Meeting Bhante Jason
19:19 - Sophisticated use of the 4 Foundations of Mindfulness
22:46 - Entering the 4 jhānas, clearing emotional blockages
23:34 - Mechanism of gaining insight
24:31 - The Buddhist path in brief
24:59 - 3 kinds of delusion
25:35 - The 4 noble truths
29:14 - When the mind lets go
31:55 - Leaving Bhante Jason's hermitage
35:21 - Making Jina's broken mind functional
38:25 - A scripture parade
38:45 - A period of chaos and profound uncertainty
40:05 - Stabilising and improvement of functionality
40:59 - Interpersonal difficulties and wounds of mistrust
42:52 - Manifestation of unprocessed trauma
43:55 - Becoming a cure for trauma
44:53 - Himalayan Buddhism as Jungian shadow work
45:49 - Jina's karmic background with Himalayan Buddhism
47:02 - Differences between the realisations of early vs later Buddhist forms
49:16 - Formless states vs true nirvāṇa
52:25 - A private meeting with Mingyur Rinpoche and 5 month retreat in caves
54:40 - Plans for India
55:21 - Reflections on the wandering life
57:05 - Developing intuition and accessing the magical mind
58:59 - Reflections on the path, meaning, and ennobling character
01:00:43 - Learning from your mistakes
01:02:29 - Incremental improvement
01:02:32 - Aiming for sainthood
…
Kathmandu Interviews playlist:
- https://youtube.com/playlist?list=PLlkzlKFgdknwvU82dU487LhF_mF4AkGek&si=gFGJpi-fnLtxeyZ5
Previous episode with Bhante Jason:
- https://www.guruviking.com/podcast/ep12-bhante-jason-guru-viking-interviews
…
For more interviews, videos, and more visit:
- https://www.guruviking.com
Music 'Deva Dasi' and 'Meditation' by Steve James

Geektown Radio - TV News, Interviews & UK TV Air Dates
Interview: Scoring Netflix's ‘Exploding Kittens' with Composers Jina An & Shirley Song

Aug 30, 2024 · 34:01


Welcome to a new Geektown Behind The Scenes podcast. This week I'm chatting with Jina An & Shirley Song, the composers behind Netflix's brilliant adult animated series 'Exploding Kittens'. If you haven't yet caught 'Exploding Kittens', the series is inspired by the beloved card game from Matthew Inman of The Oatmeal webcomic, Elan Lee, and Shane Small. After a meeting in heaven, there is a general consensus that Earth sucks, so God (voiced by the wonderful Tom Ellis from 'Lucifer') gets fired and sent to Earth to reconnect with humanity. The catch? He's trapped in the body of a chubby house cat. As part of Godcat's rehabilitation, he moves in with a dysfunctional family and tries to solve their problems, but ends up spending a lot of time chasing laser pointers. And to top it off, Godcat's next-door neighbour, who is also a cat, turns out to be none other than his nemesis, the Antichrist. It is brilliant, hilariously funny, and Season 1 is all on Netflix right now! Jina and Shirley have been composing together pretty much since college, and were tasked with creating a soundtrack that captures the show's explosive energy. This also involved them touching on a huge array of musical genres like 8-bit nostalgia, energetic Cajun flair, heavy metal rock, and both choral and orchestral scores to bring the madcap adventures and quirky characters to life. Outside of 'Exploding Kittens', the duo are also known for working on the beloved Netflix series 'XO, Kitty', and have a number of other projects coming up, which we talk about in the interview. Support this show: http://supporter.acast.com/geektown. Hosted on Acast. See acast.com/privacy for more information.

Nation of Animation
The Music of Exploding Kittens with Jina An & Shirley Song

Aug 10, 2024 · 52:19


This week, Ryan and Brooke are booming with joy to talk about Netflix's Exploding Kittens, and learn about the world of soundtrack composition and scoring from composers Jina An and Shirley Song! Jina and Shirley share their process, how they adapted a fast and frenzied party game into an animated series, dream projects, and more! And a surprising new temptation might threaten to capture Brooke's heart and possibly ruin her life, if we can turn it into content. Find out more by listening!
Learn more about Jina An on her website: https://jinaanmusic.com/
Learn more about Shirley Song at her website: https://www.shirleysong.com/
Follow our Bluesky @nationofanimation and our Instagram and Twitter @cartoonbookclub, and follow our hosts @thebrookesmith and @ryanwithcheese on Twitter & http://brookeerinsmith.com http://ryangstevens.com & support secret projects on Venmo @nationofanimation
BIG THANKS TO: Jacob Menke for our theme (follow them @menkemaster) & Urvashi Lele for our art. Learn more about Urvashi Lele's animations by visiting http://www.sirpeagreenstudios.com and follow their endeavors on Instagram at @sirpeagreen and @maisonaudmi & a very special thanks to Jina An and Shirley Song for talking to us!
The State of Animation is EXPLODING!
Shows we talked about: Exploding Kittens
Real World Recs: Brooke: Barry, streaming on Max. Ryan: M. Night Shyamalan's Trap
This podcast is a part of Audio Mint. If you want to follow us, check us out on Instagram (@audiomintchi) or on Facebook, at Audio Mint. If you wanna support us even more, check out our Patreon by searching Audio Mint on the app or the website!

AWR Swahili / Kiswahili / لغة سواحلية
The Name of Jesus Is Blessed; How Do You Use Your Time?

Aug 10, 2024 · 29:00


The name of Jesus has power. God has given us time; use it well.

Mindful In Minutes Meditation
Recall A Past Life Meditation Ft. Jina Seer

Jul 21, 2024 · 31:22


Have you ever wanted to go into a meditation and see a past life? Now you can. In this guest meditation, Jina Seer of Past Lives and the Divine leads you through a 30-minute meditation to help you unlock your past life.
Check out Jina and her work: Follow Jina on Instagram | Learn more about Jina | Listen to Jina's podcast on Apple Podcasts and Spotify | Access Jina's longer PLR sessions below: Slower paced PLR | Release blocks practice | Scared of seeing a past life? Listen to this
More Mindful in Minutes:
Books: Order Meditation For The Modern Family | You Are Not Your Thoughts: An 8-Week Anxiety Guided Meditation Journal | **Download 4 sample days from You Are Not Your Thoughts Here**
Join MIM on Patreon here
Meditation TT: 40-Hour Meditation Teacher Training is now open for enrollment. Learn more and enroll here
Let's Connect: Email Kelly your questions at info@yogaforyouonline.com | Follow Kelly on Instagram @yogaforyouonline | Please rate, subscribe and review (it helps more than you know!)
Learn more about your ad choices. Visit megaphone.fm/adchoices

Mindful In Minutes Meditation
Past Lives 101 w/ Jina Seer

Jul 17, 2024 · 72:47


In this second episode in the friendship series, Kelly talks to friend and past life regression specialist Jina Seer, of Past Lives and the Divine. In this chat Kelly and Jina talk all about past lives, what to expect in a session with Jina, and some weird sh*t, and have a great time doing it.
Learn more about Jina: Listen to Past Lives and the Divine Podcast on Apple Podcasts and Spotify | Follow Jina on Instagram | Visit Jina's website to learn more and join her email list to get notified of when her session calendar opens
More Mindful in Minutes:
Join Kelly in 2024 or 2025: Icelandic Homecoming, Iceland, October 5-10, 2024. Learn more here | Wild and Wondrous Woman, Scottish Highlands, May 5-10, 2025. Learn more here
Books: Order Meditation For The Modern Family | You Are Not Your Thoughts: An 8-Week Anxiety Guided Meditation Journal | **Download 4 sample days from You Are Not Your Thoughts Here**
Join MIM on Patreon here
Let's Connect: Email Kelly your questions at info@yogaforyouonline.com | Follow Kelly on Instagram @yogaforyouonline | Please rate, subscribe and review (it helps more than you know!)
Learn more about your ad choices. Visit megaphone.fm/adchoices

Design Systems Podcast
113 - Config 2024: Highlights and Hot Takes with Jina Anne and Adekunle Oduye

Jun 28, 2024 · 21:03 · Transcription available


Recorded during Config 2024, this bonus episode of The Design Systems Podcast features long-time friends of the pod, Jina Anne and Adekunle Oduye. They share their fresh takes on the latest Figma keynote, including the buzz around AI-generated designs and the revamped Figma UI. The conversation explores the future of design systems, deeper code integrations, and personalized user experiences enabled by AI. Plus, Jina and Adekunle give a sneak peek into their upcoming book!
View the transcript of this episode. Check out our upcoming events.
Guest: Jina Anne is a designer, developer, and community advocate. She founded Clarity (ClarityConf.com), the premier design systems community conference, and organizes meet-ups in the San Francisco area. Jina is also a published author and public speaker. Adekunle Oduye is a UX engineer specializing in design systems, front-end development, and prototyping. He is also an avid travel fan, host of the Code & Pixels Podcast, and is working with Jina Anne to write a book.
Host: Chris Strahl is co-founder and CEO of Knapsack, host of @TheDSPod, DnD DM, and occasional river guide. You can find Chris on Twitter as @chrisstrahl and on LinkedIn.
Sponsor: Sponsored by Knapsack, the design system platform that brings teams together. Learn more at knapsack.cloud.

Les Cast Codeurs Podcast
LCC 313 - 313 CCL

Jun 15, 2024 · 79:45


Katia, Guillaume, Emmanuel and Antonio discuss Kotlin, Micronaut, Spring Boot, Quarkus, LangChain4j, LLMs in Java, reproducible builds, and the AMA question of the day: how do you sustain a career as a dev at 40? Recorded June 14, 2024. Episode download: LesCastCodeurs-Episode-313.mp3
News
Languages
Android with Kotlin Multiplatform or Flutter with Dart? https://developers.googleblog.com/en/making-development-across-platforms-easier-for-developers/
Layoffs continued at Google, and the Flutter/Dart team, like many others, was affected; on social media, people concluded that Google was divesting from Flutter and Dart. Meanwhile, on the Android side Google is pushing Kotlin and KMP, so people also wondered whether Google had taken sides against Flutter/Dart. To help developers better understand the two platforms, with their advantages and drawbacks, the directors of both wrote a joint article. If you want an experience closer to the hardware and the latest Android features, with a truly native Android UI/UX, go with Kotlin/KMP. If you want a cross-platform web, mobile and desktop experience with a shared UX and business logic shared from a single code base, Flutter and Dart are the better fit.
KotlinConf recap https://x.com/gz_k/status/1793887581433971083?s=46&t=C18cckWlfukmsB_Fx0FfxQ
Multiplatform RPC; the "Grow with the Flow" talk showing a Kotlin rewrite simpler than complex solutions elsewhere; power-assert for writing tests; Kotlin 2.0 and its major evolutions; Kotlin Multiplatform now stable; Kotlin Compose Multiplatform continuing to mature; an experience report on migrating from Android Jetpack to Kotlin Multiplatform; use cases for coroutines and scopes.
Libraries
Quarkus wants to move to a foundation https://quarkus.io/blog/quarkus-in-a-foundation/
The goals: improve adoption (even further), improve transparency and collaboration, and encourage multi-vendor participation. First step: more open governance. Second step: moving to a foundation. There is an ongoing exchange with the community on the proposal and the candidate foundations, with criteria for choosing one (notably delivery speed).
Quarkus 3.11 https://quarkus.io/blog/quarkus-3-11-0-released/
WebSockets Next in progress; Dev Services for observability (Grafana, Jaeger, OpenTelemetry); an Infinispan cache extension. #38448 - Observability extensions - Dev Services, Dev Resources, LGTM; #39836 - Infinispan Cache Extension; #40309 - WebSockets Next: client endpoints; #40534 - WebSockets Next: initial version of security integration; #40273 - Allow quarkus:run to launch Dev Services; #40539 - Support for OIDC session expired page; #40600 - Introduce OidcRedirectFilter
LangChain4j 0.31 is out https://github.com/langchain4j/langchain4j/releases/tag/0.31.0
Web search for RAG with Google and Tavily; RAG over SQL databases (experimental); retrieval of the sources surfaced by RAG when AiServices returns a Result; LLM observability for OpenAI, to be notified of requests, responses and errors; integrations for Cohere (embeddings), Jina (embeddings and re-ranking scoring) and Azure CosmosDB as an embedding store; Gemini updated with parallel function calling and system instructions.
Spring Boot 3.3.0 is out
https://spring.io/blog/2024/05/23/spring-boot-3-3-0-available-now
Class Data Sharing support; Micrometer support for span tags and more; Spring Security improvements such as JwtAuthenticationConverter; Docker Compose support for Bitnami container images; virtual threads for WebSockets; SBOM support via an actuator; SNI for embedded web servers; new documentation built with Antora.
Micronaut 4.5 is out https://github.com/micronaut-projects/micronaut-platform/releases/tag/v4.5.0
The Netty-based server includes blocking-operation detection, and modules using it will tell the user when certain operations should instead run on a virtual thread or in the IO thread pool. Micronaut Data adds multitenancy support with discriminator-based partitioning for JDBC and R2DBC, and cursor-based pagination for JDBC and R2DBC (also important for Jakarta Data). Support for Jakarta Servlet annotations, for example to configure servlet filters. Virtual thread and HTTP/2 support. A new JSON Schema module to generate JSON Schemas for Java records. A new Source Gen module for cross-language source generation for Java and Kotlin. A new Guice module to import existing Guice modules.
Web
Angular 18 is out https://blog.angular.dev/angular-v18-is-now-available-e79d5ac0affe
Experimental support for zoneless change detection. Angular.dev is now the official site for Angular developers. Material 3, deferrable views and built-in control flow are now stable and received a series of improvements. Server-side rendering improvements such as i18n hydration support, better debugging, hydration support in Angular Material, and event replay using the same library as Google Search.
Data and Artificial Intelligence
A pure-Java version of Meta's Llama3 LLM https://github.com/mukel/llama3.java/tree/main uses Java's upcoming Vector API.
JLama, an LLM inference engine in Java using the Vector API https://www.infoq.com/news/2024/05/jlama-llm-inference-java/
Based on llama.c, an LLM inference engine (the part that executes requests), JLama is implemented with the Vector API and PanamaTensorOperations; several alternatives exist (native bindings, pure implementations in Java, Scala or Kotlin). A minimal Vector API sketch follows at the end of this section.
Target Speech Hearing https://www.infoq.com/news/2024/05/target-speech-hearing/
A new deep-learning algorithm from the University of Washington lets you listen to a single person of your choice and erase all the noise around them. The system requires the wearer of the headphones to press a button while looking at someone speaking, or simply to gaze at them for three to five seconds. This lets a model learn the speaker's vocal patterns and latch onto them so it can play the voice back to the listener, even if the speaker moves around and the listener stops looking at them. According to the researchers, this is a significant advance over existing noise-cancelling headphones, which can effectively cancel all sounds but cannot select speakers by their vocal characteristics. Currently the system can enroll only one speaker at a time, and enrollment only succeeds if no other loud voice is coming from the same direction. The team open-sourced their code and dataset to help future research into improving target speech hearing.
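To make the Vector API items above concrete, here is a minimal, hypothetical sketch of the kind of SIMD kernel that pure-Java inference engines such as Llama3.java and JLama build on: a vectorized dot product written against the JDK's incubating jdk.incubator.vector module. The class name and array values are illustrative, not taken from either project; run with --add-modules jdk.incubator.vector on a recent JDK.

    import jdk.incubator.vector.FloatVector;
    import jdk.incubator.vector.VectorOperators;
    import jdk.incubator.vector.VectorSpecies;

    public class DotProduct {
        // Pick the widest vector shape the current CPU supports.
        private static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

        static float dot(float[] a, float[] b) {
            float sum = 0f;
            int i = 0;
            int upper = SPECIES.loopBound(a.length);
            // Process SPECIES.length() floats per iteration using SIMD lanes.
            for (; i < upper; i += SPECIES.length()) {
                FloatVector va = FloatVector.fromArray(SPECIES, a, i);
                FloatVector vb = FloatVector.fromArray(SPECIES, b, i);
                sum += va.mul(vb).reduceLanes(VectorOperators.ADD);
            }
            // Scalar tail for the elements that do not fill a full vector.
            for (; i < a.length; i++) {
                sum += a[i] * b[i];
            }
            return sum;
        }

        public static void main(String[] args) {
            float[] a = {1f, 2f, 3f, 4f, 5f, 6f, 7f, 8f, 9f};
            float[] b = {9f, 8f, 7f, 6f, 5f, 4f, 3f, 2f, 1f};
            System.out.println(dot(a, b)); // prints 165.0
        }
    }

Matrix-vector multiplication in a transformer forward pass is essentially many such dot products, which is why the Vector API matters so much for pure-Java LLM inference.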
Tooling
Using LLMs to migrate between testing frameworks https://www.infoq.com/news/2024/06/slack-automatic-test-conversion/
Slack migrated 15,000 tests from Enzyme to React Testing Library with an 80% success rate. The migration was needed because Enzyme does not support React 18. The team first tried to automate the conversion with AST transformations but reached only 45% success, because of the complexity of Enzyme's methods and the lack of access to contextual DOM information. They then tried Claude 2.1 for the conversion, with success rates between 40% and 60%, the results depending heavily on the complexity of the task. After these unsatisfying results, the team observed how human developers approached converting the unit tests: they combined their knowledge of React, Enzyme and RTL with the rendering context and the AST conversions from the initial tool. In the end, Slack's engineers combined AST transformations and LLMs, feeding rendered React components and AST conversions into the prompts, and reached an 80% success rate, demonstrating how complementary the two technologies are. Claude 2.1 is a large language model (LLM) announced in November 2023 by Anthropic; it includes a 200,000-token context window, significant reductions in hallucination rates, system prompts, and tool use. Since then, Anthropic has introduced the Claude 3 family, made up of three distinct models with multimodal capabilities and improved contextual understanding. An abstract syntax tree (AST) is a tree representation of the abstract syntactic structure of source code written in a programming language; each node of the tree represents a construct of the source code. A syntax tree focuses on the structure and content needed to understand what the code does. ASTs are commonly used by compilers and interpreters to analyze and examine code, enabling various transformations, optimizations and translations during compilation. A minimal AST sketch follows at the end of this section.
A testing IDE from JetBrains https://blog.jetbrains.com/qa/2024/05/aqua-general-availability/
Aqua, the first IDE designed for test automation, supports several languages (Java, Python, JavaScript, TypeScript, Kotlin, SQL) and test frameworks (Selenium, Playwright, Cypress). Why? Testing applications requires specific skills, and Aqua is an IDE tailored to and recommended by test automation engineers. Aqua offers two license plans: a free one for non-commercial use and a paid one for commercial use.
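As a companion to the Slack migration story above, here is a minimal sketch of what working on an AST looks like in practice. It uses the open-source JavaParser library purely for illustration (an assumption on my part: the episode names no specific tool, and Slack's codemods actually target JavaScript rather than Java): parse a snippet into a tree, then walk its typed nodes.

    import com.github.javaparser.StaticJavaParser;
    import com.github.javaparser.ast.CompilationUnit;
    import com.github.javaparser.ast.body.MethodDeclaration;

    public class AstDemo {
        public static void main(String[] args) {
            // Parse source text into a tree of typed nodes instead of raw strings.
            CompilationUnit cu = StaticJavaParser.parse(
                "class Greeter { String greet(String name) { return \"hi \" + name; } }");

            // Visit every method-declaration node and read structured facts from it.
            cu.findAll(MethodDeclaration.class).forEach(m ->
                System.out.println(m.getNameAsString() + " : " + m.getType()));
        }
    }

A transformation rewrites such nodes and prints the tree back out; the limitation Slack hit is that the tree alone cannot tell you what the code does at runtime, which is the gap the LLM prompts filled.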
To me this seems a bit counter-intuitive in the age of DevOps and TDD: building dedicated tools, and therefore dedicated teams or people.
Methodology
The 10 principles to follow, according to the creator of cURL, to be a good BDFL (Benevolent Dictator For Life) https://daniel.haxx.se/blog/2024/05/27/my-bdfl-guiding-principles/
Be open and friendly. Ship rock-solid products. Be an open source leader. Put security first. Provide first-class documentation. Stay independent. Respond quickly. Follow the news. Stay at the cutting edge of technology. Respect feedback.
In an old Artima article, Guido van Rossum, the creator of Python and the first BDFL of a project, recalls a 1995 exchange at the origin of the concept https://www.artima.com/weblogs/viewpost.jsp?thread=235725 Guido van Rossum was the first to take on this "role".
A comprehensive site about reproducible builds https://reproducible-builds.org
Extensive documentation, from the definition through to methods for solving specific problems.
Fabien Olicard's masterclass: the Memory Palace https://www.youtube.com/watch?v=u6wu_iY4xd8
A technique for retaining information longer than your short-term memory allows.
Web APIs should not redirect HTTP to HTTPS https://jviide.iki.fi/http-redirects
Broadly, the major risk is sending confidential data in the clear over the network. The best approach would be not to redirect to HTTPS at all, but to return a real, explicit error instead; API keys in particular are easy to leak this way without noticing, precisely because of the redirects.
Security
GitHub blog post on provenance and attestation https://github.blog/2024-04-30-where-does-your-software-really-come-from/
It discusses the concepts behind securing the software supply chain and how they fit together. At a high level: hashes guarantee you have the same file; an asymmetric signature proves that I signed something (e.g. the hash) and therefore vouch for it; an attestation states facts about an artifact, such as a provenance attestation covering the source code and build instructions (SLSA provenance). The signatures themselves must be guaranteed by a certificate authority, ideally with short-lived certificates; that is Sigstore. The post also mentions The Update Framework, which builds on all this to guarantee uncompromised updates. A minimal signing sketch follows at the end of these notes, just before the conference list.
Keycloak 25 is out https://www.keycloak.org/2024/06/keycloak-2500-released.html
Argon2 for password hashing; deprecation of the adapters (Tomcat, servlet, etc.); Java 21 support and deprecation of Java 17; persistent user sessions even for online instances (to survive a Keycloak rotation); improvements around passkeys; management and health endpoints on a separate port; and more.
Ask the Cast Codeurs
At 40, can you still be a recognized coder?
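To make the hash-and-signature chain from the GitHub provenance item concrete, here is a toy sketch using only the JDK's built-in crypto APIs. It is not Sigstore: a real setup ties the public key to an identity through a certificate authority, ideally with short-lived certificates, as the article explains; the freshly generated key pair below merely stands in for that.

    import java.nio.charset.StandardCharsets;
    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.MessageDigest;
    import java.security.Signature;
    import java.util.HexFormat;

    public class SignArtifact {
        public static void main(String[] args) throws Exception {
            // Stand-in for a build artifact (normally the bytes of a jar or binary).
            byte[] artifact = "example-artifact-bytes".getBytes(StandardCharsets.UTF_8);

            // 1. Hash: anyone can recompute the digest to check they hold the same bytes.
            byte[] digest = MessageDigest.getInstance("SHA-256").digest(artifact);
            System.out.println("sha256: " + HexFormat.of().formatHex(digest));

            // 2. Asymmetric signature: only the private-key holder can produce it.
            KeyPair keys = KeyPairGenerator.getInstance("Ed25519").generateKeyPair();
            Signature signer = Signature.getInstance("Ed25519");
            signer.initSign(keys.getPrivate());
            signer.update(digest);
            byte[] signature = signer.sign();

            // 3. Anyone with the public key can verify the signed digest.
            Signature verifier = Signature.getInstance("Ed25519");
            verifier.initVerify(keys.getPublic());
            verifier.update(digest);
            System.out.println("signature valid: " + verifier.verify(signature));
        }
    }

An attestation goes one step further: instead of signing the digest alone, it signs a statement of facts about the artifact (where the source came from, how it was built), which is what SLSA provenance standardizes.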
Conferences
The list of conferences comes from Developers Conferences Agenda/List by Aurélie Vache and contributors:
June 12-14, 2024: Rencontres R - Vannes (France)
June 13-14, 2024: Agile Tour Toulouse - Toulouse (France)
June 14, 2024: DevQuest - Niort (France)
June 18, 2024: Mobilis In Mobile 2024 - Nantes (France)
June 18, 2024: BSides Strasbourg 2024 - Strasbourg (France)
June 18, 2024: Tech & Wine 2024 - Lyon (France)
June 19-20, 2024: AI_dev: Open Source GenAI & ML Summit Europe - Paris (France)
June 19-21, 2024: Devoxx Poland - Krakow (Poland)
June 26-28, 2024: Breizhcamp 2024 - Rennes (France)
June 27, 2024: DotJS - Paris (France)
June 27-28, 2024: Agi Lille - Lille (France)
July 4-5, 2024: Sunny Tech - Montpellier (France)
July 8-10, 2024: Riviera DEV - Sophia Antipolis (France)
September 6, 2024: JUG Summer Camp - La Rochelle (France)
September 6-7, 2024: Agile Pays Basque - Bidart (France)
September 17, 2024: We Love Speed - Nantes (France)
September 17-18, 2024: Agile en Seine 2024 - Issy-les-Moulineaux (France)
September 19-20, 2024: API Platform Conference - Lille (France) & Online
September 25-26, 2024: PyData Paris - Paris (France)
September 26, 2024: Agile Tour Sophia-Antipolis 2024 - Biot (France)
October 2-4, 2024: Devoxx Morocco - Marrakech (Morocco)
October 7-11, 2024: Devoxx Belgium - Antwerp (Belgium)
October 8, 2024: Red Hat Summit: Connect 2024 - Paris (France)
October 10, 2024: Cloud Nord - Lille (France)
October 10-11, 2024: Volcamp - Clermont-Ferrand (France)
October 10-11, 2024: Forum PHP - Marne-la-Vallée (France)
October 11-12, 2024: SecSea2k24 - La Ciotat (France)
October 16, 2024: DotPy - Paris (France)
October 17-18, 2024: DevFest Nantes - Nantes (France)
October 17-18, 2024: DotAI - Paris (France)
October 30-31, 2024: Agile Tour Nantais 2024 - Nantes (France)
October 30-31, 2024: Agile Tour Bordeaux 2024 - Bordeaux (France)
October 31 - November 3, 2024: PyCon.FR - Strasbourg (France)
November 6, 2024: Master Dev De France - Paris (France)
November 7, 2024: DevFest Toulouse - Toulouse (France)
November 8, 2024: BDX I/O - Bordeaux (France)
November 13-14, 2024: Agile Tour Rennes 2024 - Rennes (France)
November 20-22, 2024: Agile Grenoble 2024 - Grenoble (France)
November 21, 2024: DevFest Strasbourg - Strasbourg (France)
November 27-28, 2024: Cloud Expo Europe - Paris (France)
November 28, 2024: Who Run The Tech ? - Rennes (France)
December 3-5, 2024: APIdays Paris - Paris (France)
December 4-5, 2024: DevOpsDays Paris - Paris (France)
December 4-5, 2024: Open Source Experience - Paris (France)
December 6, 2024: DevFest Dijon - Dijon (France)
January 22-25, 2025: SnowCamp 2025 - Grenoble (France)
April 16-18, 2025: Devoxx France - Paris (France)
Contact us
To react to this episode, come discuss on the Google group https://groups.google.com/group/lescastcodeurs
Contact us via Twitter https://twitter.com/lescastcodeurs
Record a crowdcast or a crowdquestion
Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs
All episodes and all the info at https://lescastcodeurs.com/

Business Innovators Radio
Interview with Jina St. James, Chief Visionary, CEO with St. James Team: Brokered By eXp Realty

Jun 6, 2024 · 22:05


Welcome to the world of Jina St James, where real estate isn't just a job – it's an adventure in the wilds of Colorado Springs! With over a decade of taming the real estate market under her belt, Jina's not just a seasoned pro; she's a real estate ninja, slicing through the complexities of buying and selling homes with the precision of a samurai and the grace of a ballet dancer. Picture this: 15 years ago, Jina leaped into the real estate jungle, armed with nothing but her wits and an unbreakable determination. Fast forward to today, and she's the Indiana Jones of New Home Sales, uncovering hidden gems and leading families to their dream homes like a modern-day treasure hunter. Her battle cry? "You dream it, I unearth it." And she's not just talking the talk. Jina's love for this real estate rodeo is as real as the Colorado mountains – and just as majestic. But hold onto your hats, because there's more! Jina's not just a wizard with New Builds; she's a real estate polymath, juggling negotiations, marketing, and team coaching with the finesse of a seasoned circus performer. She's not just playing the game; she's rewriting the rulebook. Her magic potion? A 'check every box' attitude mixed with a soothing, persuasive charm that turns the nerve-wracking journey of buying or selling a house into a joyride in a convertible. And her community spirit? It's like she's the Dumbledore of real estate, mentoring the next generation of property prodigies. And let's not forget her superhero alter ego: a 22-year veteran in the marathon of marriage and a mom to a mini-army of six. Jina's driven by a 'reach for the stars' philosophy, proving that you can indeed have your cake and eat it too – and in her case, probably sell the bakery as well! So, if you're in the market and looking for a real estate experience that's as thrilling as a blockbuster action movie but smooth as a spa retreat, call Jina St James. She's not just selling homes; she's leading a crusade to find your kingdom!
Learn more: https://www.stjamesteam.com/
Elite Real Estate Leaders Podcast: https://businessinnovatorsradio.com/elite-real-estate-leaders-podcast
Source: https://businessinnovatorsradio.com/interview-with-jina-st-james-chief-visionary-ceo-with-st-james-team-brokered-by-exp-realty

Colorado Real Estate Leaders
Interview with Jina St. James, Chief Visionary, CEO with St. James Team: Brokered By eXp Realty

Jun 6, 2024 · 22:05


Welcome to the world of Jina St James, where real estate isn't just a job – it's an adventure in the wilds of Colorado Springs! With over a decade of taming the real estate market under her belt, Jina's not just a seasoned pro; she's a real estate ninja, slicing through the complexities of buying and selling homes with the precision of a samurai and the grace of a ballet dancer. Picture this: 15 years ago, Jina leaped into the real estate jungle, armed with nothing but her wits and an unbreakable determination. Fast forward to today, and she's the Indiana Jones of New Home Sales, uncovering hidden gems and leading families to their dream homes like a modern-day treasure hunter. Her battle cry? "You dream it, I unearth it." And she's not just talking the talk. Jina's love for this real estate rodeo is as real as the Colorado mountains – and just as majestic. But hold onto your hats, because there's more! Jina's not just a wizard with New Builds; she's a real estate polymath, juggling negotiations, marketing, and team coaching with the finesse of a seasoned circus performer. She's not just playing the game; she's rewriting the rulebook. Her magic potion? A 'check every box' attitude mixed with a soothing, persuasive charm that turns the nerve-wracking journey of buying or selling a house into a joyride in a convertible. And her community spirit? It's like she's the Dumbledore of real estate, mentoring the next generation of property prodigies. And let's not forget her superhero alter ego: a 22-year veteran in the marathon of marriage and a mom to a mini-army of six. Jina's driven by a 'reach for the stars' philosophy, proving that you can indeed have your cake and eat it too – and in her case, probably sell the bakery as well! So, if you're in the market and looking for a real estate experience that's as thrilling as a blockbuster action movie but smooth as a spa retreat, call Jina St James. She's not just selling homes; she's leading a crusade to find your kingdom!
Learn more: https://www.stjamesteam.com/
Elite Real Estate Leaders Podcast: https://businessinnovatorsradio.com/elite-real-estate-leaders-podcast
Source: https://businessinnovatorsradio.com/interview-with-jina-st-james-chief-visionary-ceo-with-st-james-team-brokered-by-exp-realty

RUSK Insights on Rehabilitation Medicine
Dr. Jina Libby and Dr. Laurenie Louissaint: Global Health Spotlight: Rehabilitation Medicine in Namibia, Part 2

May 22, 2024 · 23:21


Dr. Jina Libby completed her PM&R residency in Michigan. Her dedication to that profession and sports medicine extends beyond clinical practice as she serves on the executive committee for the International Rehab and Global Health Committee of AAPM&R. Her fervor for education is evident through her commitment to teaching physical medicine and rehabilitation, not only locally, but also by championing its integration on an international scale. Beyond her current role as a fellow physician, Dr. Laurenie Louissaint's compassionate spirit leads her on frequent global impact trips, where she provides critical medical support to underserved communities, such as Haiti and Namibia. She also is an active member of the New York City cycling community while also providing medical care for injured cyclists and developing related research. Part 1 The discussion in Part 1 included the following items: demographic aspects of Namibia, major health problems in that nation, how health care is financed, similarities with western allopathic health practices, use of traditional and alternative health care interventions, status of health professions educational institutions, and nature of the auspices sponsoring the visitation trip by U.S. clinicians to that country. Part 2 The discussion in Part 2 included the following items: types of health professionals in the group visiting Namibia, kinds of Namibian practitioners interacted with during the visit, most evident aspects of health care in that nation where improvements would appear to be beneficial, possibly reversing the flow of clinicians to enable Namibians to spend time in U.S. clinical facilities, and health professional literature produced in that country.

RUSK Insights on Rehabilitation Medicine
Dr. Jina Libby and Dr. Laurenie Louissaint: Global Health Spotlight: Rehabilitation Medicine in Namibia, Part 1

May 8, 2024 · 34:29


Dr. Jina Libby completed her PM&R residency in Michigan. Her dedication to that profession and sports medicine extends beyond clinical practice as she serves on the executive committee for the International Rehab and Global Health Committee of AAPM&R. Her fervor for education is evident through her commitment to teaching physical medicine and rehabilitation, not only locally, but also by championing its integration on an international scale. Beyond her current role as a fellow physician, Dr. Laurenie Louissaint's compassionate spirit leads her on frequent global impact trips, where she provides critical medical support to underserved communities, such as Haiti and Namibia. She also is an active member of the New York City cycling community while also providing medical care for injured cyclists and developing related research. Part 1 The discussion in Part 1 included the following items: demographic aspects of Namibia, major health problems in that nation, how health care is financed, similarities with western allopathic health practices, use of traditional and alternative health care interventions, status of health professions educational institutions, and nature of the auspices sponsoring the visitation trip by U.S. clinicians to that country. Part 2 The discussion in Part 2 included the following items: types of health professionals in the group visiting Namibia, kinds of Namibian practitioners interacted with during the visit, most evident aspects of health care in that nation where improvements would appear to be beneficial, possibly reversing the flow of clinicians to enable Namibians to spend time in U.S. clinical facilities, and health professional literature produced in that country.

Drzazgi Świata
045 On the Streets for 45 Years. What Is a Revolution? - Emilia Pluskota and Anahita Rezaei

Drzazgi Świata

Play Episode Listen Later Apr 24, 2024 66:31


The most recent revolution in Iran, which erupted after the death of Mahsa Amini, a Kurdish woman arrested for wearing her hijab in a way that violated the country's law, was long and massive, and it spread across the Internet and around the world. But a year and a half after it broke out, it is worth asking whether it changed anything. Did hundreds of millions of shares, hashtags, and videos have any effect? The world would like those effects and changes to be immediate, clearly visible, and spectacular, like the toppling of a dictatorship. But is that always what revolution looks like when different people have different visions of freedom? And how tired are Iranians of protesting, given that they have been taking to the streets for decades while, on the surface, little changes? I invited Emilia Pluskota and Anahita Rezaei to discuss this. Emilia is a documentary filmmaker and cultural researcher rooted in the global education movement. She directed and executive-produced the documentary "Jina," which tells the story of Iranian women taking part in the revolution against the Islamic Republic of Iran, and produced the documentary "Stolen Fish," about the exploitation of Africa's west coast by Chinese fishmeal factories. In her work she draws on tools from cultural anthropology and focuses on migration and refugee issues. Anahita is a visual artist who spent several years of her teens in Tehran before returning to Poland with her family. She specializes in photography and documentary film production and is passionate about the future of art, design, and craft.

Faith & Family Filmmakers
Mastering Your Craft: Directing Tips with Brian Cates

Faith & Family Filmmakers

Play Episode Listen Later Apr 15, 2024 16:10 Transcription Available


Episode 28 - Mastering Your Craft: Directing Tips with Brian Cates
In this members' episode of the Faith and Family Filmmakers Podcast, host Jaclyn Whitt talks with writer, director, and producer Brian Cates about the importance of dedicated practice, accountability, and continuous learning in the journey of mastering filmmaking. They delve into Brian's personal experiences, highlighting the value of having a time slot and a captive audience for feedback. The discussion also covers how new technologies and platforms offer opportunities for filmmakers to hone their skills regularly. Brian shares inspiring stories of individuals and teams creatively pursuing their passion in filmmaking through consistent practice and learning from their efforts. Key advice includes giving oneself a 'time slot' to create, embracing failures as learning opportunities, and the significance of persistence. The conversation further explores the nuances of directing, including working with actors, the importance of pacing, and specific tips for directing children, with insights drawn from industry professionals like Steven Spielberg. The episode encapsulates the essence of never giving up, evaluating one's dedication honestly, and the journey towards mastering the craft of filmmaking within the Christian community.
Hear about:
The Importance of Mastering Your Craft
Leveraging Social Media and Creating Accountability
Real-World Examples of Craft Mastery
The Journey of Learning and Persistence
Director Tips / Pacing
Director Tips / Line Readings for Child Actors
Director Tips / Working with Experienced Actors
Brian Cates is an Emmy® Award-winning filmmaker, and is best known as the director and co-writer of the Dove Award nominated feature film, FAMILY CAMP. Over the past 15 years he has directed over 500 short films with The Skit Guys, and with them has built one of the largest faith-based short-film distribution networks in the world. Brian lives in Oklahoma City with his wife and best friend, Jina, and their three daughters.
The Skit Guys online: skitguys.com
Brian on Instagram: https://www.instagram.com/notbriancates/
The Faith & Family Filmmakers podcast helps filmmakers who share a Christian worldview stay in touch, informed, and inspired. Releasing new episodes every Monday, we interview experts from varying fields of filmmaking; from screenwriters, actors, directors, and producers, to film scorers, talent agents, and distributors. It is produced and hosted by Geoffrey Whitt and Jaclyn Whitt, and is brought to you by the Faith & Family Filmmakers Association.
Support Faith & Family Filmmakers: Our mission is to help filmmakers who share a Christian worldview stay in touch, informed, and inspired. Please help by becoming a supporting member or leaving a One-Time Donation.
Get Email Notifications
Enter the Faith & Family Screenwriting Awards festival
Faith and Family Screenwriting Academy: https://www.faffassociation.com/
Script Notes and Coaching: https://www.faffassociation.com/script-services
Jaclyn's Actor's Reel Script Writing Workshop: https://www.faffassociation.com/actors-reel
Copyright 2024 Ivan Ann...

Faith & Family Filmmakers
500 Short Films in 20 Years, with Brian Cates

Faith & Family Filmmakers

Play Episode Listen Later Apr 15, 2024 24:52 Transcription Available


Episode 27 - 500 Short Films in 20 Years, with Brian Cates
In this episode of the Faith and Family Filmmakers Podcast, host Jaclyn Whitt interviews Emmy Award-winning filmmaker Brian Cates. Brian shares his journey from being passionate about movies and competitive drama during his youth to becoming the director and co-writer of the Dove Award-nominated film Family Camp. Alongside the Skit Guys, Brian has produced over 500 short films and helped build one of the largest faith-based short-film distribution networks in the world. The discussion delves into Brian's initial interests in acting and filmmaking, his spiritual calling to ministry, his work with the Skit Guys, and his eventual transition to feature filmmaking. Brian provides insights into the creative and spiritual aspirations behind his work with the Skit Guys, emphasizing "Humor, Heart, and Him" as core elements. He also discusses the genesis and development of Family Camp, highlighting the collaborative effort and divine providence involved in making the project a reality.
Listen for:
Welcome and Introduction
Getting to Know Brian Cates
The World of Competitive Drama and Audiobook Narration
Brian's Journey into Filmmaking
A Weekly Time Slot: Youth Group Videos
A Call to Ministry, and the Birth of The Skit Guys
The Evolution of Skit Guys and Church Content Distribution
Transitioning from Skits to Feature Films
The Impact of The Skit Guys
Conclusion and Where to Watch Family Camp
Brian Cates is an Emmy® Award-winning filmmaker, and is best known as the director and co-writer of the Dove Award nominated feature film, FAMILY CAMP. Over the past 15 years he has directed over 500 short films with The Skit Guys, and with them has built one of the largest faith-based short-film distribution networks in the world. Brian lives in Oklahoma City with his wife and best friend, Jina, and their three daughters.
The Skit Guys online: skitguys.com
Brian on Instagram: https://www.instagram.com/notbriancates/
The Faith & Family Filmmakers podcast helps filmmakers who share a Christian worldview stay in touch, informed, and inspired. Releasing new episodes every Monday, we interview experts from varying fields of filmmaking; from screenwriters, actors, directors, and producers, to film scorers, talent agents, and distributors. It is produced and hosted by Geoffrey Whitt and Jaclyn Whitt, and is brought to you by the Faith & Family Filmmakers Association.
Support Faith & Family Filmmakers: Our mission is to help filmmakers who share a Christian worldview stay in touch, informed, and inspired. Please help by becoming a supporting member or leaving a One-Time Donation.
Get Email Notifications
Enter the Faith & Family Screenwriting Awards festival
Faith and Family Screenwriting Academy: https://www.faffassociation.com/
Script Notes and Coaching: https://www.faffassociation.com/script-services
Jaclyn's Actor's Reel Script Writing Workshop: https://www.faffassociation.com/actors-reel
Copyright 2024 Ivan Ann Productions

Conspirituality
Brief: Woman, Life, Freedom (w/Negin Shiraghaei)

Conspirituality

Play Episode Listen Later Mar 30, 2024 34:29


The death of Mahsa Jina Amini at the hands of the Islamic Republic's “morality police” in September 2022 lit a fuse on a fervent global protest movement. Jina's crime was wearing an “improper” headscarf in public. The slogan, “Woman, Life, Freedom,” which first emerged in the fight for Kurdish equality, was used by thousands of brave, defiant women in Iran (along with male supporters), who called for an end to oppression, discrimination, tyranny, and dictatorship. Eighteen months later, the Iranian government has increased video surveillance and morality police patrols, imposed even harsher penalties for female disobedience, and leaned even further into brutal torture methods and daily executions. Meanwhile, Ali Khamenei's administration floods the population with propaganda and conspiracy theories. Julian talks to Iranian human rights activist and former BBC World Service reporter, Negin Shiraghaei, about the Woman, Life, Freedom movement, which she says remains active and unstoppable.
Show Notes
Iran one year after Woman, Life, Freedom
Iran: Alarming Surge in Executions
Khamenei Refuses US Help, Citing COVID Conspiracy Theory
Iran's Supreme Leader Calls Gender Equality a Zionist Plot
Iranian Singer Sentenced to 3 years in Jail for Mahsa Amini Protest Anthem
Learn more about your ad choices. Visit megaphone.fm/adchoices

Kpopcast
Oakland billboard leaks TXT Tour, TWICE Vegas Concert, Hyeri x Han Sohee x Ryu Jun-yeol

Kpopcast

Play Episode Listen Later Mar 20, 2024 115:47


We discuss TXT's concert, which was leaked through billboard advertising in Oakland, sexy K-pop comebacks, and the love triangle (?) between Hyeri, Han Sohee, and Ryu Jun-yeol. We also offer some hit replay song recommendations and provide polarizing reactions to some recent K-pop singles.
Join the Kpopcast Slack: https://join.slack.com/t/kpopcast/shared_invite/zt-93kzxcv6-YNej2QkyY6vaPnhEQJxk0A
HIT REPLAYS:
비비 (BIBI) - Sugar Rush https://www.youtube.com/watch?v=O_B-0iPCbaU
KISS OF LIFE (키스오브라이프) 'Nobody Knows' https://www.youtube.com/watch?v=unLuba-TQiE
Cauli Flower _ No Regret (feat.ilipp&JINA) https://www.youtube.com/watch?v=JCRSSZUM17I
from20(프롬트웬티) - Demon https://www.youtube.com/watch?v=kHbPE7BLU8Q
Hosted on Acast. See acast.com/privacy for more information.

Radio Sweden Kurdish - ڕادیۆی سوید - Radyoya Swêdê
Hungary puts Sweden's NATO accession to a vote. The swine fever outbreak is nearly over. The prime minister's wife has travelled free of charge.

Radio Sweden Kurdish - ڕادیۆی سوید - Radyoya Swêdê

Play Episode Listen Later Feb 20, 2024 2:38


Today's important news from Sweden, 20.02.2024, in this podcast from the Kurdish section of Radio Sweden. Presenter: Newzad Hirorî. Producer: Lorîn Îbrahîm Berzincî.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Cloud Intelligence at the speed of 5000 tok/s - with Ce Zhang and Vipul Ved Prakash of Together AI

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Feb 8, 2024 63:11


Our first ever demo day aimed for 15-20 people and ended up ballooning to >200 and covered in the news. We are now running the 2024 edition in SF on Feb 23: Latent Space Final Frontiers, a startup and research competition in “The Autonomous Workforce”, ​”Beyond Transformers & GPUs”, and “​Embodied AI”. RSVP here! You can find all LS online/IRL events on our new calendar. Super Early Bird tickets have just gone on sale for AI Engineer World's Fair, June 25-27!Today we have the honor of hosting two of Together AI's co-founders: Ce Zhang (CTO) and Vipul Ved Prakash (CEO). This is a rare opportunity to recap the history of the company since our last check-in with Tri Dao (Chief Scientist), some of their big releases, and do a deep dive into the state of the AI inference market. Together has emerged as one of the most consequential new startups in the new AI summer, last announcing a ~$100m Series A raise in November (at a ~$360-565m valuation). But there are at least three Togethers - Together the Research Lab, Together the Fine Tuning & Inference platform, and Together the custom models service. As we clarify on the pod, the overarching philosophy of Together is the ability to improve on all these fronts simultaneously by being “full stack”, from the lowest level kernel and systems programming to the highest level mathematical abstractions driving new model architectures and inference algorithms.Bringing Research and Industry TogetherIn just one year, Together has been behind some of the most exciting research in AI:* RedPajama, a fully open source dataset for model pre-training which mirrored the Llama1 recipe. Then followed by RedPajama2, a 30T tokens dataset of filtered and de-duplicated tokens. * RedPajama-INCITE-3B and 7B, which were SOTA in a few benchmarks at the time of release. * FlashAttention-2, developed by Together's Chief Scientist Tri Dao. We covered FA-2 in a previous episode with him.* Mamba-3B, the most promising transformer-alternative model that they released in collaboration with Cartesia. * StripedHyena, a SOTA graft of Hyena state space models and transformer models together* Medusa, an alternative to speculative decoding that lets you use multiple decoding heads instead of a draft model. * MonarchMixer, which was one of the most popular orals at NeurIPS 2023. It's an approach to transformers that replaces many of its core parts with Monarch matrices for better computational efficiency. And I'm sure we missed something! As Vipul reveals, almost 50% of Together staff is researchers, and two of their co-founders (Chris Ré and Percy Liang) are professors at Stanford, so we can expect a lot more here.Bringing “Disaggregated” GPUs TogetherOn their cloud, they offer inference as a service, fine-tuning, pre-training, etc, but unlike other providers they think of themselves as a disaggregated cloud. Today, they have ~8,000 A100 and H100 GPUs on their platform (an exclusive revealed on the pod!) totaling over 20 exaflops of compute, but instead of just buying more and putting them in a cluster and then exposing a `us-east-1` option for customers, they are taking heterogenous compute sources and adding a unified layer on top of it for developers to consume. Building on Ce's research, Together's GPU Clusters are taking on comparable AWS and GCP offerings in both cost and speed:Take the Hessian AI center in Germany or the DoE's INCITE; they have GPUs that they want to share with researchers, but they lack the cloud layer over it. 
Similarly, there's starting to be more and more differentiation amongst types of GPUs: H100s, A100s, MI300s, etc. Each of them has different availability and performance based on task, and the end user shouldn't have to be a hardware expert to run inference on a model, so Together abstracts a lot of that away.
A big theme of the Together inference stack, a “bag of 50 tricks” that we discuss on the pod, is also “hardware-aware” algorithms like FlashAttention and Mamba, which further emphasize the benefits of co-developing everything together:
Special Focus: Transformer Alternatives
As we mentioned above, they are also funding a lot of research in Transformer alternatives. To reiterate a few points on why they matter:
* Longer context is not the motivation for sub-quadratic architectures: Transformers don't inherently have hard limitations on context size, but they just get extremely expensive. When developing sub-quadratic alternatives, you easily enable very long context, but that's not how you should compare them. Even at the same context size, inference and training are much cheaper on sub-quadratic architectures like Hyena.
* Emergence of hybrid architectures: a lot of early conversations have been around the “post-Transformers” era, but it might be more like “half-Transformers”. Hybrid architectures could have split layers with some transformer-based and some state-space ones. One of the challenges is that a lot of hardware kernels are optimized for transformer operations, so you'd lose a lot by moving away completely.
* Higher speed = higher GPU throughput: if we could reach the same benchmark performance on subquadratic architectures, it'd solve a lot of the GPU crunch. Today we peak at ~170 tok/s on inference in some open models; if we could reach 5,000 tok/s on the same card (5,000 / 170 ≈ 29, hence roughly 30x), you'd be able to serve 30x more customers on the same hardware. As a cloud provider, you're obviously incentivized to get there.
We had a lot of fun chatting with the Together guys and we covered a lot of ground, so enjoy the conversation!
Note: This is the first episode of a “cloud providers mini-series”.
We have Erik from Modal and Ben from Replicate coming up next!
Video Podcast
Join us to watch the video version of this pod on our snazzy YouTube!
Show Notes
* Together AI
* RedPajama Dataset v1 Announcement
* RedPajama Models v1 Announcement
* Together Embeddings
* StripedHyena-7B
* Mamba-3B-SlimPJ
* Vipul's X thread on Anyscale
* Vipul's Razor
* SemiAnalysis' "Inference Race to the Bottom" post
* Chris Ré
* Mike Conover's episode
* Slim Pajama by Cerebras
* Dolma by AI2
* Jina AI
* Tengyu's Voyage AI
Timestamps
* [00:00:00] Introductions
* [00:00:43] Origin and current state of Together.ai
* [00:02:15] Transition from Apple to Together and the vision for open AI
* [00:04:54] How Chris Ré introduced Ce and Vipul
* [00:08:43] How RedPajama came to be
* [00:13:34] Model training and Transformer alternatives
* [00:15:37] DSIR and the importance of data in LLMs
* [00:21:19] Inference vs Fine-tuning vs Pre-training usage on Together
* [00:23:20] Together's GPU stash
* [00:27:02] Why standardization of inference metrics is important
* [00:29:26] Building moats in AI inference
* [00:31:49] Federated vs disaggregated cloud computing
* [00:34:57] Opportunities for improvement in the inference stack
* [00:36:13] Anyscale benchmarking drama
* [00:41:27] Not just an inference platform
* [00:43:50] Together Embeddings and the future of embedding models
* [00:45:53] State space models and hybrid architectures
* [00:53:52] The need for 5,000 tokens/s speed in AI inference
* [01:00:23] What's the most interesting unsolved question in AI?
Transcript
Alessio [00:00:00]: Hey, everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO in Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai. Swyx [00:00:14]: Hey, and today we're together with Together. Welcome to the studio, guys. Ce / Vipul [00:00:20]: Thank you. Swyx [00:00:21]: I don't know how you typically give self intros, but does anyone want to go first? How do we get our audience acquainted, especially to who's speaking, because it's unusual for us to do a four-person pod. Yeah. Ce [00:00:33]: Hi, everyone. I'm Ce. I'm one of the co-founders of Together and the CTO, working with the team on technical things. Vipul [00:00:40]: I'm Vipul Ved Prakash, co-founder and CEO of Together. Swyx [00:00:43]: I always consider you guys as one of the sort of all-in-one companies. I always want to say labs, but I feel like you're not a lab. What is the sort of origin of Together, and then what is it today? I feel like it used to be Together.xyz, and then now you're Together.ai. Vipul [00:01:00]: I think fundamentally, Together is about open and independent AI systems. We think this is one of the most consequential technologies of our time, and when we started the company in June 2022, our focus was to build a platform for open source, independent, user-owned AI systems. One way to think about it is big labs, frontier model labs, have built their own platforms for developer platforms for their models. We think of Together as a platform for everything else, whether these are open models, whether these are models being built by companies that are owned by them. Our sort of XYZ roots, we have a fairly deep decentralization and open ethos that kind of reflects in all our platform and strategy and business.
And we also, the way we structure our cloud is by combining data centers around the world instead of, you know, we are today not located in hyperscalers, we have built a footprint of AI supercomputers in this sort of very disaggregated, decentralized manner. Alessio [00:02:15]: I know before Together, you were at Apple, so you go from like the most walled garden, private, we don't say anything company, to we want everything to be open and everybody to know somebody. What maybe did you learn from like the Apple way of being super close and polished and maybe what are you taking now to Together to make it open, but also a very nice developer experience? Vipul [00:02:37]: Yeah, I would say, you know, one sort of my, you know, background has been in open source for a long time. One of the first things I created was a collaborative spam filter, you know, this was back in the day. It's called Vipul's Razor. And it became quite popular. And the first company I founded called CloudMark was built around, you know, taking open source and building both an open side of it and a commercial product around it. I think Apple is sort of very focused on providing this amazing experience to its customers with, you know, most of the technology sort of hidden behind the product. And certainly the focus on fluidity and applying complex technology to make everyday things simple is something that Apple does really well. And, you know, that's been a sort of big part of how we think about our developer platforms. I think it informs it. The other thing is that during my years at Apple, we, you know, worked a lot on deep learning. And one of the things that was sort of very viscerally accessible to me was how well these systems worked. We, you know, we built an open domain Q&A system. This was based on Facebook's LSTM paper in 2016. And it was remarkable because we had a parallel system based on sort of information retrieval techniques, which is extremely complicated, didn't work that well. And you know, this thing we wrote in a week was just incredible performance. So I think some of those experiences, at least for me personally, sort of were creating this roadmap of how important and powerful this technology is. And you know, when the scaling laws paper was published, I was very clear, like it was in some ways something very profound. We've never had algorithms that improve in capabilities with scale out. So this is almost a new era of computing. So that's been, I think, the influence of Apple, my years at Apple, really for me, like crystallized the value of what we are doing together. Alessio [00:04:54]: And how did you decide to join forces? Because you did a postdoc with Chris Ré at Stanford. You know, we already had Tri Dao from Together and we talked about Hazy. What was like the meeting of the mind of, hey, I come from like the more technical postdoc assistant professor background and we've got yet a more product thing. What got you excited to like build this now? Ce [00:05:15]: So we have been working on this together, Chris, in the essentially last like 10 years, right? So a machine learning system 10 years ago was like probabilistic graphical models, right? And then convolutional neural network and then all the foundation model that we see today. But if you look at this, I think that fundamentally the thing we are actually optimizing is actually not that different. It's always about data movement across essentially all the stacks, right?
So when you do distributed like computing, it's about communication across different machines. When you do, for example, flash attention, it's about data movement at a different essentially memory hierarchy, right? So we have been doing this in the last 10 years and seeing the field start grow, grow, grow. So we kind of feel the current kind of this like wave of technology is actually the perfect time to actually bring all the research essentially into something real. And we are super lucky that we got introduced to Vipul, right? And then we hoped to join forces and bring this to the real world. Swyx [00:06:10]: It's an unusual team of like sort of research and industry. Like you've been like a third or fourth time founder now. Third time founder, yeah. And so like what is your first order of business when you like set up together? Like how do you sort of put something like this together? Oh my God, I'm going to use this word so much. Vipul [00:06:27]: I feel AI companies are really kind of driven by research. And Chris and I had been talking about how to reduce the cost of building models. We felt that there aren't really big data moats around foundation models. They are built from a subset of the web. What is difficult is the cost of capital to build these. And one of the ways in which you can reduce this cost is by making more efficient systems. With that, it was really about finding the right set of co-founders and team. In fact, when Chris introduced me to Ce, and I think within the first five minutes of talking to Ce, I was like, we are starting this company. And our early focus was thinking about this more sort of disparate set of resources, you know, GPUs around the internet. Can we use those to build? And we really have to compress communication for, you know, when we do gradient averaging, there's just a lot of traffic. And if you can reduce that somehow, you sort of open up the possibility of using cheaper compute, you know, across the network. And Ce's research for a decade has been in that subject. You know, and from there, finding, you know, other folks in the network, I think there is generally a lot of excitement and philosophical alignment around what we are doing, which, you know, we publish papers, we publish open source libraries and code, we build open models. And I think the people in academia in, you know, machine learning and NLP, that's really what they want to do. So I think that's been really a kind of kernel for, you know, composition of the company. And we're lucky to have, you know, at this point, attracted some of the best researchers in the field. So I think that's the most important thing. And, you know, the rest of it is sort of driven by us. A couple of these philosophies around independent systems and decentralization and good developer interfaces, you want to make it accessible. That's, you know, just as important. And the rest follows from there, I think. Alessio [00:08:43]: I want to try and fill in some of the blanks in the history of Together. I think people come on your website today and they say, you raised a hundred million dollars Series A. They're like, wow, these guys are like super legit company. But it feels like Red Pajama just came out a year ago. I remember we had Mike Conover in the studio, who had built Dolly at Databricks. And you announced it literally the morning we were recording. So we're like in the studio on our phones, looking at it.
And it's like, wow, this is like the first time now there's like a good curated dataset to do open pre-training. So maybe let's start from there. Like, what was the motivation behind it? Why did you decide to do that? It's, datasets are one of the things that most people don't want to work on. They just want to do models, not datasets.Ce [00:09:27]: Yeah. So, yeah, first one is not the first, right? So I think it's actually built on a whole bunch of amazing effort the community already have. For example, Eleuther have the pile, right? There's a whole bunch of amazing datasets they have, like C4, right, from Google, right? So I think really get inspired by the impact those like datasets have on the community, right? So I think when we did Red Pajama, it was a time that people are really fascinated by Lama, the model, like Lama 1, right? Which I feel like decades ago, right? But it's kind of, people are really excited about the quality, right? So that's really like a big shift in people how to think about open model. People start to see hope, right? So, but the one problem of Lama is the data recipe is being described in a pretty detailed way in the paper, but the data is actually not there. So, and our original thinking is how about we take the recipe and we try to do our best effort reproduction and try to put it out, such that we can learn from our mistakes in the reproduction together, right? So that's essentially the original thinking behind Red Pajama. And we have been pretty happy and excited about what community have been kind of build on it. For example, there's a dataset called Slim Pajama, right? Which do deduplication over our data, right?Swyx [00:10:38]: From Cerebras, did they talk to you before?Ce [00:10:39]: Oh, yeah, yeah, yeah, yeah. So, yeah, so we are very good friends so we can discuss about technical perspective. We are pretty excited because I think it's kind of why we do Red Pajama in the first place is that people can actually build not only models, but also datasets essentially over that piece of artifact, right? So that's actually what inspired us to do the first version of Red Pajama dataset.Swyx [00:11:01]: Yeah, and then you released V2 maybe two months ago.Ce [00:11:04]: Yeah.Swyx [00:11:05]: 30 trillion tokens.Ce [00:11:06]: Yeah, 30 trillion tokens. So I think what's exciting about Red Pajama V2 is not only the number of tokens, but we start to kind of learn from Red Pajama V1. So one thing that we learned was that data quality is really the core, right? So you want to take this couple trillion token dataset and try to bring them down maybe to one trillion or two trillion, right? The way that you actually filter them, deduplicate them is not something that kind of pre-decided before you see the application, right? So you kind of want to have a modular framework to think about data quality, right? So like given application, let's automatically or maybe semi-automatically try to come up with a way to filter it down. So that's why in Red Pajama V2, we kind of overlay the dataset with like 40 different pre-computed quality signal, right? If you want to reproduce your best effort, like C4 filter, it's kind of like 20 lines of code, right? And this open up this opportunity you can actually put different filter together, learn the combination of filter. We are very excited to see what community actually come up with using Red Pajama V2.Swyx [00:12:11]: It was retrospectively so obvious that this is a good idea that I wonder how come more datasets don't do this. 
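
To make the quality-signal idea above concrete: a minimal Python sketch of a C4-style filter over pre-computed signals, in the spirit of the "20 lines of code" Ce describes. The flat JSONL layout and the field names (rps_doc_word_count, rps_doc_frac_no_alph_words) are illustrative assumptions, not RedPajama V2's guaranteed schema.

    import json

    # Keep a document only if its pre-computed quality signals pass C4-like thresholds.
    # Signal names and thresholds below are assumptions for illustration.
    def keep(doc):
        sig = doc["quality_signals"]
        if sig["rps_doc_word_count"] < 50:            # drop very short pages
            return False
        if sig["rps_doc_frac_no_alph_words"] > 0.3:   # drop pages of mostly non-words
            return False
        return True

    with open("redpajama_shard.jsonl") as f:
        kept = [d for d in map(json.loads, f) if keep(d)]
    print(len(kept), "documents survive the filter")

Because the signals ship alongside the data, trying a different filter means re-running this loop with new thresholds, not re-processing the raw crawl.
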
You release the dataset with all these toggles that you can turn on and off, right? And you can sort of tune up and down the quality in ways that you believe is important to you. Yeah, I just, it makes so much sense now in retrospect. Because everyone just publishes like their pipeline and then the end result. But what about all the intermediate stages? Yeah.Ce [00:12:35]: Yeah, so I think, so there are multiple things there. I don't think we are the only one like doing that. For example, like Doma from AI2, right? They have this very flexible format to actually put in those quality signals, right? Think like, we are actually calling them some, right? So you can actually load Red Pajama using their tool. That whole thing should work, right? So I think one fundamental thing that changed in the last year, essentially, in the beginning when people think about data, it's always like a byproduct of the model, right? You release the model, you also release the data, right? The data side is there essentially to show people, ah, if you train on this data, you'll get a good model. But I think what started to change is when people started building more and more of those models, people started to realize like different subset of data side is kind of valuable for different applications, right? The data becomes something to play with, right? So I think we are kind of lucky that we happen to release Red Pajama right at that point that we get this opportunity to actually learn from that.Alessio [00:13:34]: And you guys have a custom model training platform on Together 2. You have a bunch of stuff in there for data selection, like the DSIR and things like that. How did you decide to work on that versus, because you first started with like some of the fine tunes on LLAMA. Do you see a lot of interest there? And I know you've been doing a lot of research on state space models and other transformer alternatives. Like, do you also see that as something you'll keep working on this year and push more people towards?Vipul [00:14:02]: Yeah, I mean, we, you know, we think of how to make training more efficient and building models more efficient. Part of that is being able to select the right dataset. This is why you have signals, DSIR. You can start with a small dataset and find similar documents, build models with that. So we think it's an important part of the kind of model build tooling that, you know, sort of widely useful for people building different kinds of models. Similarly, you know, we are running into the limits of how fast you can make transformers. And we want inference at 5,000 tokens per second. I don't think we will get there with transformers and we need to learn longer sequences. Data, again, becomes very, very expensive with transformers. So I work on space state models and all the research that we are doing there. And hopefully other labs will pick up on this and make it a kind of important target for optimization. But we think that, you know, open source is a great place for this. We can provide these recipes for data and for training to our customers who are building, you know, custom models themselves. And, you know, we are quite excited about the sort of progress we are seeing there.Alessio [00:15:18]: Do you have some of these models available for inference on Together? 
Can people play around with a strictly, you know?Swyx [00:15:25]: Yeah.Vipul [00:15:25]: Yeah, they're available for inference on our serverless platform.Swyx [00:15:29]: I always try to be the person who asks about acronyms in case, you know, people want to understand. Should we explain importance resampling, you know, that kind of stuff?Ce [00:15:37]: Oh, yeah. So DSIR essentially, it's a fundamental idea. So it's one of the paper from Percy, right? So essentially, if you know what you are doing, you can actually use that as a very strong signal about what data to put in to insert training process, right? So that's essentially the fundamental idea, right? So, and then more concretely, right? So there are actually different versions of DSIR, right? So one version is like if you have a validation site, right? You can actually somehow measure the similarity between the validation site and also your pre-trained corpus and essentially subset, like the subset. And often there's actually like less targeted version of DSIR where you'll say, yeah, maybe Wikipedia is actually a very good corpus. Let's try to find more Wikipedia, right? And you can think about it in two ways, either as a way to come up with different weights for different data slices. Yeah, so as like filter type of step. Yeah, for a data set, or think about that as like data augmentation. So that's how, yeah, that's how we think about DSIR.Swyx [00:16:33]: That makes sense. I will have to read the paper to understand a little bit more. Because when you say things like, we have to know in advance what we were trying to do with the model, then we do importance resampling. That is against the principle of general intelligence, right? Like the point is to train AGI.Ce [00:16:48]: Yeah, so it depends on what do you mean by being general or generic, right? So I think, I mean, you can always take a meta-learning perspective that we know the distribution of tasks that we care about, right? So you can always go kind of up in the ladder of how general the whole thing is, right? But also for many of the customers that we are actually talking to, right, they have kind of very targeted application, right? The benefit you can get out of that is you could build a better open model, often smaller, often easier to do inference, if you know what you want, right? So I think the whole trade-off would be, and the x-axis would be how generic the whole thing will be. The y-axis would be not only the top accuracy, but also a whole bunch of the deployment cost, right? The size of the model, right? The robustness of the model. So I think different people will navigate the space in different way. And we want to be the platform, essentially, whatever point that you want, we have a solution for you.Swyx [00:17:43]: One more thing on data before we go deeper on state-space models. Are we running out of data? Can we go in order of magnitude? Can we go five orders of magnitude? How do both of you think about how much data we have and how much we need?Ce [00:17:55]: Yeah, so I think that's a very, very good question. So I don't think we are running out of data on Earth.Swyx [00:18:02]: Right, so think about it globally. Training data, training class data.Ce [00:18:05]: Yeah, yeah, so I think, I mean, some of them are not accessible, right? But I do think there are many organizations in the world have enough data to actually train very, very good models, right? So, I mean, they are not publicly available, right? 
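
A simplified sketch of the importance-resampling idea Ce outlines above: model the target and raw distributions over hashed n-gram features, then keep raw documents with the highest log importance weights. This illustrates the concept only; it is not the DSIR paper's exact estimator.

    import math
    import re
    from collections import Counter

    NUM_BUCKETS = 10_000  # hashed n-gram feature space

    def features(text):
        toks = re.findall(r"\w+", text.lower())
        grams = toks + [a + " " + b for a, b in zip(toks, toks[1:])]
        return Counter(hash(g) % NUM_BUCKETS for g in grams)

    def log_prob(doc_feats, dist):
        # bag-of-hashed-ngrams log-likelihood with add-one smoothing
        total = sum(dist.values())
        return sum(c * math.log((dist[b] + 1) / (total + NUM_BUCKETS))
                   for b, c in doc_feats.items())

    def importance_weight(text, target_dist, raw_dist):
        f = features(text)
        return log_prob(f, target_dist) - log_prob(f, raw_dist)  # log p_target / p_raw

    # Fit each distribution by summing features over a corpus, e.g.:
    # target_dist = sum((features(t) for t in wikipedia_texts), Counter())
    # raw_dist    = sum((features(t) for t in crawl_texts), Counter())

Fitting target_dist on Wikipedia and raw_dist on the crawl, then sampling documents in proportion to their weights, reproduces the "find more Wikipedia" behavior described above.
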
But there are people who actually have access to those, right? So I think in general, right? So if you think about the data in the open space, right? So I guess that was specifically that you actually mean whether we are running out of data. I do think there need to be some way, right? That people who are training open models get connected with essentially data that's not internet data. So I think that channel need to be opened up for the open model to get more data, right? But I'm kind of on the optimistic side that the society will figure out a way that we can train open models that's beyond this internet data.Swyx [00:18:57]: Beyond internet, meaning books?Ce [00:19:00]: I mean, there are a lot of those, right?Swyx [00:19:02]: Books, right?Ce [00:19:02]: Transcripts, right? Videos, audios, right? So there are a whole bunch of data sources that we are not integrating into open data side, right? So, and maybe they shouldn't be open, right? So I think the community need to figure out a way, yeah, like the best balance, yeah? Such that we can have open models, but on the other hand, also have a reasonable collection of data that we can actually use.Swyx [00:19:29]: I think a lot of people think that, there's a theory that Whisper was released so that you could transcribe YouTube and then use that as a source of tokens. Then I talked to other researchers who are like, you know, YouTube has very low quality tokens. You know, do you want your model to talk like a live streamer from YouTube? Because that's what they're going to do. So it's not clear, like what the quality of this data could be.Ce [00:19:53]: Yeah, I guess that depends on your application, right? So I think as a platform, right? So our goal is whatever application that you have, yeah, so we have a platform that you can actually achieve your goal, right? So there are definitely applications that kind of make sense to speak like YouTube, right? So, but there are probably also other application that kind of more on the formal side, right? So I think there are going to be a diverse collection of models, both open and closed, right? So, and we kind of want to be the engine that powers that.Swyx [00:20:21]: There's a lot of people who own data sources who are doing the locally optimal thing and humanity as a whole is losing out. So like New York Times is swinging open AI, you know, Stack Overflow shut down their API, Reddit shut down their API, X, you know, made their own model, right? On Twitter data. We're just going to have all these like tiny little gardens of data that it would be useful in a general model, but everyone's just trying to make their own model. And it seems like globally suboptimal.Vipul [00:20:47]: I think you need to have some kind of a marketplace for figuring out how to get this, you know, data into models and have, I think we'll increasingly see more of that. You know, I think there's a positive aspect to it too. There is a incentive for creators to participate in a system, which is sort of more fair relative to, you know, the capture of value by an AI company that's taking their data. But I agree. I think this is a big open problem that needs to be solved. And I hope there will be, you know, serious efforts around it.Alessio [00:21:19]: Let's talk about the most precious resource on planet earth, GPUs. You have a lot of compute obviously, but you also have a lot of product pieces. You have inference, you have fine tuning, you have pre-training. What's the split in terms of usage? 
Do you see most people are just running inference on off the shelf models? Do you see maybe some last mile fine tuning?Vipul [00:21:40]: I would say right now, the top five models on our inference stack are probably all fine-tuned versions of open models. And we've seen- Who fine-tuned them?Swyx [00:21:51]: You fine-tuned them?Vipul [00:21:52]: They were fine-tuned by our customers.Swyx [00:21:54]: By your customers.Vipul [00:21:55]: You know, either on our platform or off our platform. And we are generally seeing that, you know, that is the sort of trend where you can get better quality on your task by sort of now easily adapting these models to your data. We also have, I would say, over 20 big model builds happening on the platform, which are customer. We see a lot of training and it's also somewhat surprisingly a more continuous kind of workload. We sort of imagine that this would be more episodic. You train a model and then you do inference. But what we find is, you know, we train a model and then they train the next version and then the next version, which sort of grows in scale. I would say training is still the bigger portion. Some ways inference is super linear to model quality. And as the models are getting better, there's more and more inference.Swyx [00:22:48]: Oh, because they're more useful. Yeah, they're more useful, yeah. So, okay, so training is bigger. This is actually consistent with what we've heard from Mosaic, that, you know, people think that training is sort of like a one-time deal. You do one big run and then you're done. It's never true. And so I'm interested in, like, putting some numbers and I don't know what you have disclosed or what you want to disclose, but, like, how many GPUs do you have? What is the equivalent amount of compute that you have? Because I understand that your GPU setup is different than what people typically think of, like, a giant data center somewhere, right?Vipul [00:23:20]: I don't think we have shared this number publicly. It's, you know, so this will be the first time, I guess. Like, we have close to 7,000 to 8,000 GPUs today. It's growing monthly.Swyx [00:23:31]: What class of GPU are they?Vipul [00:23:32]: They're mostly A100s and H100s.Swyx [00:23:35]: Okay.Vipul [00:23:36]: And probably more, I think, split towards H100s now. You know, we'll be sort of building this best-of-class hardware. So as there are other versions of these coming out later this year, we plan to have those in the fleet as well.Alessio [00:23:53]: I know when we talked last year, you were also using some of the supercomputers by the Department of Energy. There was kind of like a lot of random GPU compute in the world. Have you seen that kind of getting timed out? I think maybe a year ago, people were like, oh, yeah, you can use this GPU computer that is going to be end-of-life. Has the bar changed to give access to those resources?Ce [00:24:13]: From our perspective, it's actually getting better. Yeah, so from the community perspective, because many of the institutions in the world, they're actually investing in hardware, right? So for example, we are working with one of the institutes in Germany called Hessian AI, right, which gives us a lot of help on the compute side. So they start to have this very big GPU cluster, and they're actually sharing that with the community, right? And it's not super big, right, but also not a small one, right? So you start to see this, like, different lives that start to pop up, right? 
And because of the power of the community, they start to actually share that. So we actually find as a researcher today, it's probably easier for them to actually get a GPU than last year.Swyx [00:24:56]: Interesting.Alessio [00:24:56]: And then for you to buy them, what's the state of the market right now? Is it still extremely hard to get any? Do you have Jensen's phone number? Do you have like GM phone number? Do you guys get like the SDR because you're like under 10,000?Vipul [00:25:12]: NVIDIA is obviously motivated to help us, both as an investor and we are their customers. I would say the market is very tight still, and it's likely going to be this way for a while, is my sense that the demand for AI computing is just kind of ramped up very, very quickly, and it will take a while for supply to catch up.Swyx [00:25:37]: So how tight it is, and let's say compared to like a year ago, two years ago, what do you mean when you say tight? The things you want, you can't get?Vipul [00:25:42]: You can't get them immediately. They're sort of, you know, minimally like two to three months out. Any inventory that shows up tends to clear very, very rapidly. And, you know, we obviously sort of look at this in a very detailed and analytic. There is four to 5 million GPUs that will be sold this year from NVIDIA and others buying. And if you think about 512 to 1,000 GPU cluster for a company, that's 4,000 to 8,000 companies, right? So it's in some ways a very small number. In other ways, the cost of GPUs will be, you know, 80 to $100 billion, and then you layer servers and data center space and electricity on top of that, and that's, you know, close to $250 billion worth of kind of compute, which when you compare it to the cloud computing of today, you know, AWS's last year was $88 billion in revenue. So this is really kind of a build-out happening of AI hyperscalers. It is much more disaggregated, and it's very, very global. So, you know, we think that GPUs are going to be sort of a precious resource for a long time, and using them optimally is very valuable.Swyx [00:27:02]: Yeah.Alessio [00:27:02]: Our friend, Dylan Patel from Semianalysis, he wrote a post about the inference market recently and obviously mentioned you guys. In his post, he said, our model indicates that Together is better off using two A180 gig system rather than a H100-based system. The temperature and performance testing also point to Together utilizing speculative decoding. Any thoughts? Is Dylan right? I don't know, what's-Swyx [00:27:26]: What is his model, man? What does he know that they don't know? Yeah, exactly.Alessio [00:27:30]: I wanna know, I guess like from the outside, and sometimes we even do it, we try and speculate on what people are actually doing. So for the first time, now we have a former guest writing about a current guest. So we wanna know what you guys thought and maybe what are some of the misconceptions that people from the outside have on what it takes to run like a GPU cloud today?Vipul [00:27:50]: Yeah, big fan of Dylan's, by the way. I religiously read Semianalysis. I think there were some errors in that analysis. In particular, we were trying to decode it and one of the things we noticed is that it assumed that input tokens weren't being priced. So I think that may have been an error in the model. I also don't think that there's this assumption that people are running this at a loss. I think it's very expensive. You can't do that for very long. 
And there are trade-offs in terms of batch sizes you use and the kind of tokens per second performance that are kind of system trade-offs. We've done a lot of work. This is one of the key areas of research for us. So our inference stack is a combination of 50 different sort of tricks and techniques and we think there's a lot of room for optimization here. So whichever hardware provides better performance, whether it's H100 or A100s or L40s, we can sort of measure price performance on particular hardware and we tend to use that for that model or in some cases, certain customers have data streams which can be then optimized for a particular configuration regime. So we do fairly detailed work on how to make this more efficient and so it's hard to, from the outside, looking at memory bandwidth and estimating what's actually happening.Alessio [00:29:26]: How much of these 50 tricks are you giving to yourself and how many are you gonna open? Because we have three now, obviously Flash Attention 2 is open source. He mentioned he'd love to come work together because of how much you care about open source. Yeah, how do you weigh that as a CEO and CTO?Vipul [00:29:43]: A lot of it is open, right? Flash Attention, Flash Decoding, et cetera, and we publish something that's very generally universally useful. It's going to produce better open source AI. We tend to publish as open source. I think on the inference stack, there are open source inference stacks which are pretty good and definitely today, it gives us a competitive advantage to have the best one. So we are not sort of rushing out to release everything about it. It's not overall that additive to open source out there and it is particularly useful as a business for us to provide best price performance. Yeah, we make these decisions. We have discussions. Anything that we keep closed, we generally talk about it quite a bit and decide like this is the piece that is closed for today and it may not be the case six months from now. It may not matter as much.Ce [00:30:40]: Yeah, so I think being open is kind of very important, right? So I think the whole company actually built on this idea that there's going to be ecosystem built on our open models, right? And that's also how we are really lucky to attract this top group of talents to actually join us because of the dream and the mission that we have on our side to really facilitate the open ecosystem, right? So I think in general, it's like I think all the ideas should be open. So that's why we publish papers, right? We actually talk about ideas, right? So I don't think it makes any sense to keep idea like close, right? So there are some software artifact that are kind of really deeply embedded into our kind of own kind of like stack. It kind of only useful when you're trying to build a disaggregated cloud, right? Maybe at some point that we're going to be open as people said, right? But at this moment, right? So we are kind of busy actually building it, right? So that's probably kind of getting to the picture about when that piece is going to be open, right? But I think on the research side, the ideas and for our people to publish things, I think that's really, really important, right? So I think that's how we get talent. That's how I think we as a company going to move the field forward.Swyx [00:31:49]: I noticed that you never used the word federated learning or inference. 
Is there a distinction that you draw?Ce [00:31:55]: So, I mean, it's definitely not intentional, but I think federated learning is, have been used in so many different ways by so many different people. It starts to lose a very precise meaning about what that really mean, right? If you go back to the original Google paper of federated learning, I think that's very different from what people are talking about today when they say federated. Yeah, we kind of want to be really precise about it.Swyx [00:32:18]: And so your term is disaggregated.Ce [00:32:19]: Yeah, so as an infrastructure, right? So that's disaggregated.Swyx [00:32:22]: Aren't most clouds disaggregated? Like what's different about it?Ce [00:32:27]: So one way is that most of the cloud are disaggregated, but some of that is actually being exposed to the user, right? If you go to AWS, you do know which region you are in, right? So I think one thing that we are trying to do is you have this disaggregated cloud, not only about location or geographically where they are, but about this reliability and also this diversity of this infrastructure. So, and if we want to build a reliable, high-quality layer over that, the user actually don't know, right? What's actually happening under the cover, right? So I think that's one of the difference of the way that we are thinking about infrastructure.Swyx [00:33:06]: Yeah, a bit closer to Cloudflare than AWS. Yeah. Yeah. We have one question here, which we'll just throw out, it's kind of fun. So going back to this sort of inference stack piece, maybe if you had to pull out like a call for researcher or just like point out interesting areas of work that you're interested in, what pieces of the stack have the most opportunity for improvement?Ce [00:33:27]: Yeah, so I think the way we are thinking about the inference stack is, so there are multiple things that can happen, right? So you can do better algorithms, like speckle decoding, you can change the model architecture, you can go really crazy on the system side, right? And you can also code it on the hardware, right? So it's not really clear innovation on a single dimension will get you there. So the key thesis on our side is, if you only push on one direction, you are going to reach diminishing return really, really quickly. Yeah, there's only that much you can do on the system side, only that much you can do on the algorithm side. I think the only big thing that's going to happen is when you ask all those dimensions to actually compound, right? So to have algorithm, model, and system all come together, so I think that's how we reach the next 10 times improvement on inference, right? So I don't think there's a single dimension that is particularly important, but looking at this space in a joint way, right? Try to co-optimize jointly multiple dimensions, I think that's going to be really important for the community to look at.Vipul [00:34:28]: Yeah, we often see, I see numbers from the team and you have these multiple methods, not all of them compound. So you mix these together, it's still similar results and some combination of them will have this incredible effect that is really, really super interesting. So it's very systems, you know, a kind of broad systems approach to it that's the most effective.Swyx [00:34:51]: I think I finally get the name of the company, like- Bring it together, yeah. 
Everything needs to be automated together.Alessio [00:34:57]: All right, just quickly, how does all this work change, just like some of the architectures change? I know a mixture of experts like speculative decoding is a little less efficient because of memory bandwidth. How much of it do you invest when it's a maybe model-specific improvement versus more horizontal thing? Also, you're researching different architectures, so how much do you want to spend time optimizing what state of the art today versus what's coming next?Vipul [00:35:24]: We do spend time on what state of the art today as well as what's next. You know, the value we get from doing specific optimization, even for, you know, what works well for a particular model on A100s with a particular bus versus H100s, it's a worthwhile investment for us. So we will go down fairly deep into a specific architecture and specific hardware. It does also inform what works better where, and you don't have to take the same approach for, you know, every model and every sort of hardware setup. We can take these different approaches and we do have these multiple systems now. We know that this, you know, system B is better for mixed role and system C is going to be better for stripe tying or Mamba.Alessio [00:36:13]: Before we move on from inference, we need to talk about any scale of drama. So we're actually having Sumit on the podcast tomorrow, who also talked about, kind of came to your guys' support about how, yeah, how important it's not just like, oh, together saying this benchmark's not good because they look bad in it. How, I guess like, it's a hard question to ask, but like, why did you decide to just come out and say it? And how maybe does that also reflect the values that you guys have about open source and openness and kind of like being transparent about what's real and maybe hopes for standardizing some of these benchmarks to make it more clear?Ce [00:36:56]: So it's a great service and skills doing for the community, right? I mean, it's very hard to do benchmark. The moment you do benchmark comparing N players, right, N minus one will be unhappy. You have two tables, then maybe N of them will be unhappy, right? So it's a very great thing that they're doing. And in some of the work that we are doing, we actually use RMOperf, right? So it's a great thing that they're actually doing. So I think one thing about benchmark is, and probably the professor part of me are talking, is a good benchmark should think about how it's going to incentivize the field to actually move forward, right? So if the benchmark really become a kind of standard, how are people going to over-optimize to the benchmark if you are going to do that? And when people are doing that, what are we actually trying to incentivize, right? Will that move the world to a better place? Or will that essentially have every single player focus on marketing or spending time or money on something that actually do not matter on technical side, right? It's very hard to actually strike a balance, right? So I think the reason we kind of try to give feedback on the benchmark is kind of want to open up the discussion about how does the industry should come together and define maybe a common way that we compare with each other, right? So like how database people doing TPC, right? Maybe you should have something actually similar, right? So we are trying to start some of the conversation. 
So it's not really that we jumped out to say it's not good. There's no way to have a perfect benchmark; that doesn't exist. We just tried to kickstart a conversation, so that maybe we can come together and do something the community agrees on and that aligns with the benefit a user is going to get.

Vipul [00:38:42]: I've spoken to the AnyScale team after that, and I think they had really great intentions. Partly, I think everyone had a reaction to it because it just didn't match the benchmarks that we've all run internally against different services. I think a common industry benchmark should be run by an independent party, versus one of the vendors.

Swyx [00:39:04]: Is there one that you point to?

Vipul [00:39:06]: I don't think one exists today. I think there should be. We're having some conversations about someone setting one up. And there are lots of interesting aspects of this. Time to first token is a function of where the test was run from. There is different load on these services at different times of the day, and on weekdays versus weekends. So you have to measure that well. And if all of that were done very well by an independent source, it would be a very useful service to customers and to the services themselves.

Swyx [00:39:39]: Yeah, I'll point people to artificialanalysis.ai, which is a new one that recently emerged. I don't know if they've done it right; it looks like a side project of a couple of people. But I think it's in all the providers' interest to work with them and ensure that there's an independent third party measuring these things, at least on the baseline. For me, what's worrying is more what Ce was saying: do these benchmarks skew things in ways that customers might not be mindful of? What are these things overemphasizing that we might be missing? And I don't really know. A lot of these services bundle in a version of quantization as well, so there are performance trade-offs; you're not comparing apples to apples, the same model itself, even though it's a Llama variant or whatever. So what do people trade off? They trade off latency, they trade off price. Obviously, those are the first two. But what else? What factors matter in an inference business?

Ce [00:40:33]: Yeah, so there's also throughput, and there's time to first token. And then there are things that users do not often see, for example reliability and capacity. Those also have an impact on user experience, maybe not on a single query, but in aggregate. There's also whether you are emphasizing P50 or P95; a whole bunch of things you can play with. And of course, there's quality. There are different ways to make the whole thing faster: speculation, quantization, or a combination of those. So there are many dimensions in play, and we probably need a benchmark whose protocol is transparent, to make it very clear what is being measured, with a whole bunch of checks on quality to make sure we are putting the right group of models in the same table. Then the user can actually navigate the space. I think that's going to be good for everyone.
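As a concrete illustration of the measurement problem Vipul and Ce describe, here is a minimal sketch of measuring time-to-first-token and decode throughput against an OpenAI-compatible streaming endpoint. The URL, model name, and key are placeholders, not any particular provider's API, and counting one token per SSE chunk is only an approximation.

```python
import json
import time

import requests

URL = "https://api.example.com/v1/chat/completions"  # placeholder endpoint
HEADERS = {"Authorization": "Bearer YOUR_KEY"}       # placeholder key


def measure(prompt: str, model: str = "example-model") -> dict:
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": True,
        "max_tokens": 256,
    }
    t0 = time.perf_counter()
    ttft, n_tokens = None, 0
    with requests.post(URL, headers=HEADERS, json=body, stream=True) as r:
        for line in r.iter_lines():
            if not line or not line.startswith(b"data: "):
                continue  # skip keep-alives and non-SSE lines
            payload = line[len(b"data: "):]
            if payload == b"[DONE]":
                break
            chunk = json.loads(payload)
            delta = chunk["choices"][0]["delta"].get("content")
            if delta:
                if ttft is None:
                    ttft = time.perf_counter() - t0  # first visible token
                n_tokens += 1  # roughly one token per streamed chunk
    total = time.perf_counter() - t0
    decode_time = total - (ttft or 0.0)
    return {
        "ttft_s": ttft,
        "tokens_per_s": n_tokens / decode_time if decode_time > 0 else 0.0,
    }
```

A fair benchmark would also vary client location, time of day, and load, and report P50/P95 over many runs, which is exactly the protocol transparency Ce is asking for.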
Swyx [00:41:27]: Yeah, makes sense. It's a very important field, and hopefully a good third party emerges from this. So I just want to touch on one more piece: I'm appreciating from this discussion that fine-tuning is a bigger part of your business than I thought. The other big player in fine-tuning is Mosaic (well, Mosaic is more training), but there are a bunch of other players in the fine-tuning space. If I were a prospective fine-tuning customer, what do I come to you with? Do I come to you with my custom data and that's it? Do I also have to write the fine-tuning code? What level of engagement do you do with your customers?

Vipul [00:42:01]: It's across the spectrum. Some of our customers are training models, pre-training models from scratch, and many of them will bring their own data sets and use our infrastructure and training stack to train their models. There are others who have trained smaller models and want to scale up, across infrastructure and across data, so we'll help them do that. There are customers where we start a little more consultative: they have a particular task and idea in mind, and we help them get from there to the data set and the right model to achieve that task. So it's a spectrum, and our goal is to productize as much of this as possible, so that the whole process can be fast and scalable. I would say there is a lot more understanding around fine-tuning now, even compared to the last six months. There are open-source tools, recipes, literature, podcasts, Discord channels where people are figuring this out. In many ways, one of the successes of open source is that you have small collectives of engineers who are now creating the top models on open-source leaderboards, and who have tried out all sorts of data recipes, creating synthetic data, merging models. That's really fun to see, and that sort of agency that exists now is exciting. We see a lot of it being applied in products and in more commercial models that people are deploying in their applications.

Alessio [00:43:50]: And then, to wrap up on Together: it's almost becoming a platform as a service, because now you've released Together Embeddings. How did you get 92.5 accuracy on 32K retrieval? And do you think we're kind of done with embeddings, that we did everything we could and it's about as optimized as it's going to get, and we should just focus on models and inference? Or do you think there's still room to improve?

Ce [00:44:17]: Oh, I don't think we have even gotten started on embeddings. There are so many things. Embeddings are really fundamental for many things, for example RAG; that's how people bring knowledge into applications. It's also a fundamental piece when you want to build a better model: it gives you an understanding of what actually goes into the model, and you can use that to build a better data set, get a better model, then get better embeddings, and the loop continues. Without good embeddings, the loop is not closed. So I think there's a lot to do both on the quality side, like how to embed more dedicated semantics into those vectors and how to deal with negation, for example, and on how to make the whole thing really, really fast. For the next couple of years we will see a whole bunch of new embeddings, maybe of different sizes and much, much faster than today. It's a very active research area, and people should invest more in it.
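To ground the RAG loop Ce describes, here is a minimal sketch of the retrieval step: embed documents, embed a query, rank by cosine similarity. The hash-based embed function is a toy stand-in for a real embedding model, and a bag-of-words toy like this is exactly the kind of embedding that fails on the negation cases Ce mentions.

```python
import numpy as np


def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy stand-in: hash words into a normalized bag-of-words vector."""
    v = np.zeros(dim)
    for w in text.lower().split():
        v[hash(w) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v


docs = [
    "Speculative decoding speeds up inference.",
    "Embeddings map text into vectors for retrieval.",
    "State-space models scale to long contexts.",
]
index = np.stack([embed(d) for d in docs])  # one row per document


def retrieve(query: str, k: int = 2) -> list[str]:
    scores = index @ embed(query)  # cosine similarity on unit vectors
    return [docs[i] for i in np.argsort(-scores)[:k]]


print(retrieve("how does retrieval with vectors work?"))
```

In a production pipeline the retrieved passages would be fed into a model's prompt, and, as Ce points out, the same retrieval quality also feeds back into data curation for training the next model.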
Swyx [00:45:14]: I was surprised to see that there's Jina AI, and then there's Tengyu's Voyage. They are coming out as startups purely focused on embeddings.

Ce [00:45:25]: Yeah, it's a very, very important piece of the system. People haven't focused on it a lot before, and they should definitely start to.

Swyx [00:45:36]: Yeah. Why are the Chinese universities so good at embeddings? You know what I mean, right? Like BGE and... Yeah, yeah, yeah.

Ce [00:45:44]: I don't know. We just released our first embedding model, so we are still learning how to build embedding models. Ask me again in six months; I'll probably have more insight about how to build a better one.

Swyx [00:45:53]: I just noticed that ada-002 used to be at the top of the MTEB leaderboard, and then it's just been sliding down and down, and all the new models are coming out of China for some reason. I don't know what's going on there. So, we cannot leave this discussion without talking about state space models. But first of all, how much of the company is dedicated to research? It's obviously not production quality yet, but...

Vipul [00:46:17]: I would say it's like 40, 45%. I was counting this morning.

Swyx [00:46:22]: That's huge, that's the biggest... It's a big investment. Okay, well, it looks like it's paying off. And at a high level, I will confess, for the listeners who are similarly skeptical: I did not used to care about long context, because I figured 30K is enough, 100K is enough. I'm not modeling DNA sequences or anything like that, so why do I need long context? I'll throw that open to you. But what Mamba did for me was change the perception that it's only about long context, that the only reason you want sub-quadratic architectures is long context. Actually, that's not true; it's also just more efficient to train, period. So I'll leave that open to you: what's the motivation people should keep in their heads?

Ce [00:47:09]: There are multiple things, right? One thing is that the moment a model can do long context well, it often means it's cheaper. In principle, a transformer can do long context; it's just very expensive. What these state-based models try to do is push the size of the state to be as small as possible, and decouple the quadratic dependency, so you get a much better execution pattern. One direct consequence is that you can do long context really cheaply, but on the other hand it also introduces a whole bunch of benefits even when you are not doing long context, which is probably equally important. Because the state is smaller, you can run really large batch sizes and go much faster. And another thing is one of the hypotheses that we have: in StripedHyena, it starts to have a hybrid architecture, right?
Part of it is a state-based model and part of it is still transformer. Different components probably deal with different things better, so maybe by putting them together, by thinking about how information propagates over the whole horizon of the context, you can get an even better quality model than a transformer. That's why we are investing a lot in those models: not only for long context, which is very important, but also for the whole bunch of benefits they could bring.

Swyx [00:48:42]: Yeah. How should people treat the distinction between Mamba and StripedHyena? What's the point of releasing these two as separate models? Is one sort of the Together proprietary one, and the other the more open research one?

Ce [00:48:53]: Yeah, so they're pretty much at different stages of exploration; we had different hypotheses when we built them. There are different views on state-based models: one is Hyena, another is Mamba, and they're actually different architectures. When we built StripedHyena, the curiosity we had was: what is the highest-quality non-transformer model we can ever build? The goal of StripedHyena was to see whether we can match Mistral, and, by fine-tuning well, whether we can outperform it in some way. So it has a very, very strong baseline that we are trying to beat; that's why there's a hybrid in the picture. For Mamba, the curiosity was more: how far can we push a pure architecture? So we went very systematically from small to large, all the way to 3 billion, and the baseline was essentially the best 3-billion-parameter model. So they're at different stages of exploration, and at some point I think they are going to converge. We learn different things when building different models; they are just intermediate stages in the exploration at different points.

Alessio [00:50:02]: You mentioned the hybrid architecture. Is that the model grafting that you mentioned in the StripedHyena post, where you can have transformers and non-transformers together? This is a concept I hadn't heard before reading about it. I think most people's mental model is transformers OR something else, not transformers AND something else. How do you train a model that is hybrid? Is there any difference in how you construct your data sets? Is there any difference in how you run inference on it? How should people think about starting research in this field?

Ce [00:50:36]: Yeah, we were also very surprised when we came up with this hybrid architecture. The way to think about it is that you have different layers in the neural network. The state-based blocks in some layers will already give you the benefit. Other layers can be transformers, which give you this more global view of the sequence; but the remaining layers don't have to have that, and all the other benefits still kick in. We don't know what the optimal mixture between different architectures is. In principle, we could have Mamba, Hyena, and transformer blocks all come together and see what makes sense; we have no idea what's optimal there. What we are excited about is that the community now has a whole bunch of building blocks that they can put together like Lego and see what happens. We are in the process of trying to learn more about this architecture, and when we know what we are talking about, we will definitely share with the community how to do it in a systematic way.
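As a rough sketch of the "Lego blocks" idea, interleaving cheap recurrent layers with occasional attention layers, here is a toy hybrid stack in PyTorch. The recurrent block is a simplified gated linear recurrence, not the actual Hyena or Mamba operator; the layer ratio and dimensions are arbitrary choices for illustration.

```python
import torch
import torch.nn as nn


class ToySSMBlock(nn.Module):
    """Per-channel linear recurrence h_t = a * h_{t-1} + b * x_t."""

    def __init__(self, dim: int):
        super().__init__()
        self.a_logit = nn.Parameter(torch.zeros(dim))  # decay, per channel
        self.b = nn.Parameter(torch.ones(dim))
        self.out = nn.Linear(dim, dim)

    def forward(self, x):                       # x: (batch, seq, dim)
        a = torch.sigmoid(self.a_logit)         # keep 0 < a < 1 for stability
        h = torch.zeros_like(x[:, 0])
        outs = []
        for t in range(x.size(1)):              # O(1) state per step
            h = a * h + self.b * x[:, t]
            outs.append(h)
        return self.out(torch.stack(outs, dim=1))


class AttnBlock(nn.Module):
    """A plain self-attention layer: the 'global view' component."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):
        return self.attn(x, x, x, need_weights=False)[0]


def hybrid_stack(dim: int = 64, depth: int = 6, attn_every: int = 3):
    # e.g. one attention layer after every two recurrent layers
    return nn.Sequential(*[
        AttnBlock(dim) if (i + 1) % attn_every == 0 else ToySSMBlock(dim)
        for i in range(depth)
    ])


x = torch.randn(2, 16, 64)
print(hybrid_stack()(x).shape)  # torch.Size([2, 16, 64])
```

A real hybrid would add residual connections, normalization, and a hardware-friendly recurrence kernel instead of the Python loop; the sketch only shows the structural idea of mixing block types within one stack.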
Swyx [00:51:41]: Cool. What are we still unsure about? Why don't we just put all the money in the world into training these things now? What is left to figure out before we scale this thing?

Ce [00:51:53]: If you look at how the transformer was developed over the last five to ten years, people didn't start from the Attention Is All You Need paper and then immediately put all the money in. It always starts from a very systematic understanding of scaling, of data quality, and essentially of the limits. I think for a state-based model to go from the labs to the real world, you need to go through the same process. Of course, the second time doing that is easier, but there's no way to skip this systematic step of studying scaling laws and studying what data to put in, what the impact of different data slices is on the final model quality.

Swyx [00:52:33]: Do you expect that the data inputs will be different?

Ce [00:52:37]: I don't know, but I wouldn't take for granted that they should be the same. We have no opinion on that, because that's the result of the study, not the assumption. We do not need to assume it.

Swyx [00:52:51]: Okay, scaling laws and data. Anything else, like architecture, that we're not sure about? Because now you have this selection mechanism that you're pretty happy with.

Ce [00:52:59]: Yeah. First of all, how to mix them; and second, what is the architecture? If you look at the transformer, one very interesting piece is that people also optimized the hardware side to make sure things run very fast: there are very efficient kernels and very efficient hardware, and that adds another boost to the transformer architecture. Something similar should happen for state-based models: which architecture is easier to run on the hardware? If things go faster, you can put in more data, and that adds another dimension to the scaling law. So I think we just need to plow through the whole space and be really systematic, from small models to 1 billion, 3 billion, 7 billion, all the way up. I wouldn't jump around in the space; I would be patient and systematic. I think we'll get there.
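For a feel of the "systematic" step Ce describes, here is a minimal sketch of fitting a power-law scaling curve to a small-to-large sweep. The data points below are invented for illustration; only the procedure, fit the curve on small models, then extrapolate before committing the big training budget, is the point.

```python
import numpy as np
from scipy.optimize import curve_fit


def power_law(n, a, alpha, c):
    # loss(N) = a * N^(-alpha) + c, with c the irreducible loss
    return a * n ** (-alpha) + c


# Hypothetical (params, eval loss) pairs from a systematic sweep.
params = np.array([125e6, 350e6, 1.3e9, 3e9, 7e9])
loss = np.array([3.95, 3.52, 3.05, 2.82, 2.61])

(a, alpha, c), _ = curve_fit(power_law, params, loss, p0=[1e3, 0.3, 1.5])
print(f"fitted exponent alpha = {alpha:.3f}, irreducible loss ~ {c:.2f}")
print(f"predicted loss at 70B params: {power_law(70e9, a, alpha, c):.2f}")
```

The same sweep can be repeated per data slice, which is how one answers Swyx's question of whether state-based models want different data inputs: fit separate curves and compare, rather than assume.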
Swyx [00:53:52]: Yeah, well, I'm looking forward to more research from you guys to figure that out. One dimension we didn't talk about: we covered long context and efficiency, but speed is also very important. A good inference provider does, say, 70 tokens per second, and maybe that's faster than the less good inference providers that are more like 30 tokens per second; that's the rough state-of-the-art range today. That's around human speaking speed, and human reading speed is about 200 words per minute. Why do we need 5,000 tokens per second? That's my question back to Vipul. And is this something that is an emphasis for research as well, or is it more of an inference-only thing?

Vipul [00:54:29]: There are applications that consume the tokens produced by a model, so the output isn't necessarily being read or heard by humans. That's a place where we see that level of requirement today that really nobody can quite satisfy. There's also the question of, as intelligence grows, how you increase the bandwidth and reduce the latency of it. And if we can do 5,000 tokens a second, the throughput of the same card goes up significantly and it can support more applications. So I think it's important from that perspective. And then it opens up new UX possibilities. Once you can get sort of an immediate answer...

Design Systems Podcast
96. Jina Anne and Adekunle Oduye: Navigating Career Pathways in Design Systems

Design Systems Podcast

Play Episode Listen Later Jan 16, 2024 44:36


Jina Anne and Adekunle Oduye join host Chris Strahl in a candid conversation that explores career pathways within the evolving realm of design systems amidst the latest market conditions. They share valuable insights on how the key to thriving in uncertain times is understanding the nuanced dance between business language and design practice. Jina, renowned for her pioneering efforts in design tokens at Salesforce, illustrates the importance of scalability and care for user experience. Adekunle, drawing from his experience as lead design engineer at Plaid, discusses the intricate balancing act of adopting new technologies while maintaining user trust and meeting business objectives.

View the transcript of this episode. Check out our upcoming events.

Speakers
Jina (they/she) is a design systems advocate. They founded Clarity, a design systems community conference, and they maintain the Design Systems Slack. Jina co-chairs the Design Tokens Community Group, and on the Sass core team they lead the brand and website design and development. Jina is also recognized as a Google Developers Expert (in Web Technologies [UI and Tooling]). Jina has been making websites as a hobby for about 30 years. They've worked professionally in the industry for 22 years (19 of those years working with design systems). They have been said to be one of the most cheerful goths.

Adekunle Oduye (Add-eh-koon-lay Oh-due-yay) is a UX Engineer born, bred, and based in Brooklyn, New York. Currently he's at Plaid, where he's helping to build Threads, Plaid's official design system. Outside of work he's a coach, a speaker, and co-host of the Code and Pixels podcast. He's very passionate about design systems, prototyping, and front-end development. When he's not building software, you can probably find him reading up on Stoicism or planning his next adventure.

Host
Chris Strahl is co-founder and CEO of Knapsack, host of @TheDSPod, DnD DM, and occasional river guide. You can find Chris on Twitter as @chrisstrahl and on LinkedIn.

Sponsor
Sponsored by Knapsack, the design system platform that brings teams together. Learn more at knapsack.cloud.

Duke Basketball Report
#570 - Jared McCain's mom and dad

Duke Basketball Report

Play Episode Listen Later Dec 17, 2023 36:58


The Duke Basketball Roundup welcomes Jina and Lance McCain to the podcast, the parents of Duke freshman star Jared McCain. The DBR podcast has interviewed dozens of Duke players in its time, but this is one of the few times they have gotten to sit down with the parents of a Dukie to find out what that is like. The McCains provide plenty of laughs and unique insight into their very special son; from his ascendancy to one of the top social media stars in the world to his impressive basketball skills, we get a peek behind the curtain to see what it takes to reach this level of success, and what a large role his whole family plays in all of it. And this interview would not be complete without at least one question about Jared's potential NBA plans; the family's answer may surprise you. Learn more about your ad choices. Visit megaphone.fm/adchoices

One World, One Health

It's heartbreaking when a drought or flood causes crops in a region to fail, and children to go hungry. Kids can starve to death or endure social, economic, and health problems well into adulthood due to malnutrition. But what if there was a way to predict when these weather disasters are likely to happen, so governments, aid organizations, and residents could prepare? A team at the University of Chicago says people could already do this, using one of the best-known weather patterns: the El Niño Southern Oscillation or ENSO. “ENSO has destabilizing effects on agriculture, economic production, and social stability throughout areas of the global tropics that are teleconnected to it. It has been linked to human health outcomes directly through its effects on vector- and water-borne infectious diseases, as well as indirectly by decreasing agricultural yields and increasing food insecurity and the likelihood of conflict,” they write in a Nature Communications article. It's possible to predict this Pacific Ocean-based pattern, says Dr. Amir Jina, an Assistant Professor at the University of Chicago's Harris School of Public Policy and a Senior Fellow at the Energy Policy Institute of Chicago. In this episode of One World, One Health, listen as Dr. Jina explains how people could use predictions about El Niño years to get ahead of some of the forces that make children go hungry.

Root of Conflict
Kurdish Women and Resistance | Rez Gardi

Root of Conflict

Play Episode Listen Later Oct 5, 2023 59:51


What role did Kurdish women play in Iran's protests last year? The death of Jina Mahsa Amini at the hands of Iranian authorities sparked mass demonstrations for women's rights under the rallying cry of "Women, Life, Freedom." But the Kurdish minorities behind this resistance have largely been erased, and their movements co-opted, before the international community. In this episode, we speak with Rez Gardi, a Kurdish New Zealand lawyer and human rights activist, about how, despite Jina becoming the symbol of a revolution, non-Kurdish activists and news coverage have continually denied her her true name and identity. We talk about the long-lived Kurdish resistance against state oppression in Iran, Syria, and Turkey, and the broader history of the Kurdish struggle for autonomy and self-determination in the Middle East.

This podcast is produced in partnership with the Pearson Institute for the Study and Resolution of Global Conflicts. For more information, please visit their website at ThePearsonInstitute.org. Access the transcript here.

Podcast Production Credits:
Interviewing: Hannah Balikci and Zareen Hussain
Editing: Nishita Karun
Production: Reema Saleh

Wedded Wellness
Ask Me Anything with Jina and Meredith: Crazy Stories, Stupid Self Care and the Matrix

Wedded Wellness

Play Episode Listen Later Aug 31, 2023 49:01


An ask-me-anything with fan favorites Jina Seer and Meredith McCowan. We get weird!!

Learn More About Our Guests...
Learn more about Jina Seer: www.pastlivesandthedivine.com
Learn more about Meredith McCowan: www.earthlingastrology.com

Angel Invest Boston
Jina Klapisch - Sal's Weight Coach

Angel Invest Boston

Play Episode Listen Later Jul 26, 2023 53:04


Coach Jina Klapisch, who helps Sal Daher keep off 100 pounds, is our guest in this episode. Sal and Jina discuss how HMR clients are able to take off and keep off significant weight with no hunger or feeling of deprivation. Check out HMR Weight Loss and Lifestyle Coaching at: https://www.hmrnatick.com/about_us

Highlights:
● Sal Daher Introduces Jina Klapisch
● What Sets HMR Apart From the Rest
● How HMR Works
● "... What you're doing here is helping people discover the things that are easy for them to do repeatedly and make them into habits, and not try to do stuff that's hard, impossible for them..."
● Decision-Free
● Sal's Diet Journey at the Age of Thirteen
● Oprah and HMR
● "... when you're trying to make a very big change in your life, whether it's quitting smoking or quitting drinking, or with food, you need to take a break from that substance ... so that you can see clearly as you lose this weight and start to really figure out who I am as an eater..."
● How Your Environment Can Affect Your Weight
● Outlive by Peter Attia MD
● "... The first half mile that you walk every single day is the golden half mile. The first mile you walk is the golden mile. That is going to add years to your life..."
● Advice to the Audience
● Jina's Background
● "... 80% of weight loss is keeping it off, the hard part..."
● Jina's Parting Thoughts

Wedded Wellness
Accessing Past Lives through Hypnosis & the Birth Chart, Taking Breaks, and Spiritual Entrepreneurship with Jina Seer and Meredith McCowan

Wedded Wellness

Play Episode Listen Later Jul 13, 2023 59:54


Fan favorite guests Jina Seer and Meredith McCowan return with their take on all things past lives! How to access them, what to do with the information, and who in your life might be from one of your past lives!

In this episode we discuss...
Real talk on spiritual entrepreneurship
The importance and also the challenge of taking a break
How you can work with past lives through hypnosis and through astrology
Where exactly to look for past life information in the birth chart
What to do once we have accessed past life memory
Discovering past lives with your loved ones via the birth chart
Recognizing people from past lives in everyday life
Have the three of us been together in a past life?

Mentioned in this episode...
Burnout by Emily and Amelia Nagoski
Ashley's recap of a past lives regression on Past Lives and the Divine
Ashley's recap of a life between lives on Past Lives and the Divine

Learn More About Our Guests...
Learn more about Jina Seer: www.pastlivesandthedivine.com
Follow Jina on Instagram: @pastlives.tourguide
Learn more about Meredith: www.earthlingastrology.com
Follow Meredith on Instagram: @earthlingastro

Python Bytes
#343 So Much Pydantic!

Python Bytes

Play Episode Listen Later Jul 11, 2023 35:51


Watch on YouTube

About the show: Sponsored by us! Support our work through our courses at Talk Python Training, the Test & Code Podcast, and Patreon Supporters.

Connect with the hosts:
Michael: @mkennedy@fosstodon.org
Brian: @brianokken@fosstodon.org
Show: @pythonbytes@fosstodon.org

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Tuesdays at 11am PT. Older video versions available there too.

Michael #1: Pydantic v2 released
Pydantic V2 is compatible with Python 3.7 and above. There is a migration guide. Check out the bump-pydantic tool to auto-upgrade your classes.

Brian #2: Two Ways to Turbo-Charge tox (Hynek)
Not just tox run-parallel or tox -p or tox --parallel, though you should know about those too. The two ways:
1. Build one wheel instead of N sdists.
2. Run pytest in parallel.
tox builds source distributions (sdists) for each environment before running tests. That's not really what we want, especially if we have a test matrix. It'd be better to build a wheel once and use that for all the environments. Add this to your tox.ini and now we get one wheel build:

    [testenv]
    package = wheel
    wheel_build_env = .pkg

It will save time, and a lot of it if you have a lengthy build. For the second tip, run pytest in parallel, instead of tox in parallel, with pytest -n auto. This requires the pytest-xdist plugin. It can slow down tests if your tests are pretty fast anyway, but if you're using hypothesis, you probably want to try it. There are some gotchas and workarounds (like getting coverage to work) in the article.

Michael #3: Awesome Pydantic
A curated list of awesome things related to Pydantic!
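Since the headline item is the v2 release, here is a minimal sketch of the new-style API, using the renamed validator and serialization methods (field_validator, model_validate, model_dump) that replace v1's validator, parse_obj, and dict(). The Episode model and its fields are invented for illustration.

```python
from pydantic import BaseModel, field_validator


class Episode(BaseModel):
    number: int
    title: str

    @field_validator("title")
    @classmethod
    def title_not_empty(cls, v: str) -> str:
        # Runs on validation; raising ValueError becomes a ValidationError.
        if not v.strip():
            raise ValueError("title must not be empty")
        return v


ep = Episode.model_validate({"number": 343, "title": "So Much Pydantic!"})
print(ep.model_dump())  # {'number': 343, 'title': 'So Much Pydantic!'}
```

For an existing v1 codebase, the bump-pydantic tool mentioned above can rewrite much of this renaming automatically.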