Podcasts about Kwok

  • 332 PODCASTS
  • 632 EPISODES
  • 39m AVG DURATION
  • 1 EPISODE EVERY OTHER WEEK
  • Mar 13, 2025 LATEST

POPULARITY (2017–2024)


Best podcasts about Kwok

Latest podcast episodes about Kwok

DeFi Slate
Terence Kwok on Digital Identity, Proof of Personhood, and Eastern Crypto Markets

DeFi Slate

Play Episode Listen Later Mar 13, 2025 39:05


The line between real humans and bots is blurring. In today's episode, we chat with Terence Kwok, the founder of Humanity Protocol, to explore how they're balancing privacy and accountability. Humanity is using technology to read palm prints and vein patterns for verification, all without storing biometric data. This approach could offer a way for users to prove they're not bots without compromising their personal information. We look into Humanity Protocol's approach, technology, and place in the evolving landscape. Let's get into it. Join The Rollup Edge: https://members.therollup.co Website: https://therollup.co/ Spotify: https://open.spotify.com/show/1P6ZeYd.. Podcast: https://therollup.co/category/podcast Follow us on X: https://www.x.com/therollupco Follow Rob on X: https://www.x.com/robbie_rollup Follow Andy on X: https://www.x.com/ayyyeandy Join our TG group: https://t.me/+8ARkR_YZixE5YjBh The Rollup Disclosures: https://therollup.co/the-rollup-discl

Brave Dynamics: Authentic Leadership Reflections
Kwok Jiachuan: COVID-19's Impact on Nonprofits, Conjunct's Shift in Business Model & The Role of Capacity Builders - E544

Brave Dynamics: Authentic Leadership Reflections

Play Episode Listen Later Mar 4, 2025 37:45


Jeremy Au reconnects with Kwok Jiachuan, his first-ever podcast guest, to reflect on their journey from school friends to army roommates to co-founders of Conjunct Consulting. They talk about the challenges of starting and scaling a social impact consultancy, from early skepticism to securing funding and navigating the evolving nonprofit landscape. They also discuss leadership lessons, the importance of sustainability, and how their work has shaped the next generation of social impact leaders. The conversation is a candid look at what it takes to build something meaningful and why community matters.
1. From friends to co-founders: Jeremy and Kwok first met at 15 in a creative arts camp, later became army roommates, and eventually teamed up to build a pioneering social impact consultancy.
2. Solving a gap no one else saw: They realized nonprofits lacked strategic help while young professionals wanted to contribute, so they created a platform that connected both.
3. Facing doubt and rejection: People dismissed their idea, fundraising was tough, and they had to figure out everything from legal structures to convincing nonprofits to trust them.
4. Turning a passion project into a real business: What started as a volunteer effort had to evolve into a structured, financially sustainable social enterprise to survive long-term.
5. Adapting to a changing landscape: The social sector professionalized with more government funding and consulting firms entering the space, forcing Conjunct to evolve its role.
6. A legacy that lives on through people: Alumni have gone on to lead impact-driven initiatives, and Tribe Consulting, founded by former members, continues the work they started.
7. Lessons for future changemakers: Passion alone isn't enough—build for sustainability, find allies in the ecosystem, and focus on long-term impact.
Watch, listen or read the full insight at https://www.bravesea.com/blog/social-entrepreneur-wisdom Get transcripts, startup resources & community discussions at www.bravesea.com WhatsApp: https://whatsapp.com/channel/0029VakR55X6BIElUEvkN02e TikTok: https://www.tiktok.com/@jeremyau Instagram: https://www.instagram.com/jeremyauz Twitter: https://twitter.com/jeremyau LinkedIn: https://www.linkedin.com/company/bravesea English: Spotify | YouTube | Apple Podcasts Bahasa Indonesia: Spotify | YouTube | Apple Podcasts Chinese: Spotify | YouTube | Apple Podcasts Vietnamese: Spotify | YouTube | Apple Podcasts

Trash Talk
Grace Kwok - Hong Kong Environmental Campaign Committee

Trash Talk

Play Episode Listen Later Mar 3, 2025 13:27


One Night Talk 廣東話 | 溫哥華 | 香港人
Ep.377 [Is Jerald always told he looks cocky? Does he not get along with Eric Kwok? Was the band name Swing actually nothing to do with music?] OneNightTalk x Jerald Chan Full Interview, Part 1 [One Guest Talk]

One Night Talk 廣東話 | 溫哥華 | 香港人

Play Episode Listen Later Feb 26, 2025 32:26


[Is Jerald always told he looks cocky? Does he not get along with Eric Kwok? Was the band name Swing actually nothing to do with music?] OneNightTalk x Jerald Chan Full Interview, Part 1 [One Guest Talk]. Hosts: Emily and Daniel. Vancouver | Cantonese | Hong Kongers. This time we welcome musical talent, singer, instrumentalist and music producer Jerald Chan (陳哲廬)! Anyone who knows the band Swing will know Jerald. Always humble and low-key, why did he suddenly leave the music scene partway through his career? Which of his own songs is his personal favourite? And if you told him you love "1984," which Eric Kwok wrote, would he be annoyed? Hosts Emily and Daniel were extremely nervous before the interview, yet hit it off the moment they met. #Jeraldchan #陳哲廬 #Swing #大大公司 #EricKwok #1984 linktr.ee/Onenighttalk www.threads.net/@onenighttalk604 Join the ONT discussion channel for lively conversation with the hosts: t.me/onenighttalk604

The Astrophysics Podcast
Dr. Lindsey Kwok -- The Forensic Science of Supernovae

The Astrophysics Podcast

Play Episode Listen Later Feb 1, 2025 55:47


How do we know so much about supernovae, when all we see is this little point of light getting brighter and then dimmer over time? Given this minimal data, we can often say what type of star exploded, and even some details about how the explosion took place. Supernova astronomers are a lot like forensic scientists dusting for fingerprints and getting DNA samples at the scene of a crime. But instead of a typical crime scene, they are investigating the death of an entire solar system. Dr. Lindsey Kwok is a CIERA fellow at Northwestern University and an expert at using JWST to perform state-of-the-art forensic supernova science.

Thinking Crypto Interviews & News
Educating the World About Crypto & Integrating the XRP Ledger with Phil & Dom Kwok

Thinking Crypto Interviews & News

Play Episode Listen Later Jan 28, 2025 48:33


Dom and Phil Kwok are the co-founders of EasyA. They joined me to discuss EasyA's mission to educate the world about Web3. Topics:
- EasyA Overview
- Web3 Education
- Integrating Blockchains and the XRP Ledger
- Future of Web3
- TradFi adoption of Crypto
- Crypto in 2025
https://www.easya.io/ Show Sponsor -

Trash Talk
Tze Ni Yeoh and Catrina Kwok - Colgate recyclable toothpaste tube

Trash Talk

Play Episode Listen Later Jan 20, 2025 16:40


AI Live
AI Live | The Growing Rise of Male Aesthetics with Jessie Cheung & TJ Tsay

AI Live

Play Episode Listen Later Dec 6, 2024 52:21


Discover the rapidly growing field of male aesthetics! Join us as we uncover strategies from industry-leading experts Jessie Cheung & TJ Tsay for addressing the evolving landscape of male aesthetics within your practice. We'll cover everything happening in today's most popular treatments, from the latest trends in men's facial aesthetics with Dr. Kwok, to invaluable insights on sexual health with Jessie Cheung, and an exclusive discussion on penile enhancements led by TJ Tsay. This is a discussion you won't want to miss!

WorkCookie - A SEBOC Podcast
Ep. 232 - Building Trust and Inclusion in Tech-Hybrid Teams

WorkCookie - A SEBOC Podcast

Play Episode Listen Later Dec 2, 2024 65:08


We explored the challenges and potential solutions for building trust, inclusion, and collaboration in tech-hybrid or remote teams, with a focus on how technology supports transparent communication and fosters connections in tech-enabled environments and socio-technical teams. (Tech-hybrid teams blend humans and robotics, AI, or other modern technology as team members.)
In this Episode: Dr. Emi Baressi, Tom Bradshaw, special guests Keith and Daniel Edwards from the Houston RobotLab, Dr. Matt Lampe, Alexander Abney-King, Nic Krueger, Rich Cruz, Dr. Martha Grajdek
Visit us https://www.seboc.com/ Follow us on LinkedIn: https://bit.ly/sebocLI Join an open-mic event: https://www.seboc.com/events
References: Arslan, A., Cooper, C., Khan, Z., Golgeci, I., & Ali, I. (2022). Artificial intelligence and human workers interaction at team level: a conceptual assessment of the challenges and potential HRM strategies. International Journal of Manpower, 43(1), 75–88. https://doi.org/10.1108/IJM-01-2021-0052   Berretta, S., Tausch, A., Ontrup, G., Gilles, B., Peifer, C., & Kluge, A. (2023). Defining human-AI teaming the human-centered way: A scoping review and network analysis. Frontiers in Artificial Intelligence, 6, 1250725–1250725. https://doi.org/10.3389/frai.2023.1250725   Belanger, F., Collins, R. W., & Cheney, P. H. (2001). Technology Requirements and Work Group Communication for Telecommuters. Information Systems Research, 12(2), 155–176. https://doi.org/10.1287/isre.12.2.155.9695   Belling, S. (2021). Psychology of Remote Teams: Trust, People, and Connections. In Remotely Possible (pp. 59–73). Apress. https://doi.org/10.1007/978-1-4842-7008-0_5   Boccoli, G., Gastaldi, L., & Corso, M. (2024). Transformational leadership and work engagement in remote work settings: The moderating role of the supervisor's digital communication skills. Leadership & Organization Development Journal, 45(7), 1240–1257. https://doi.org/10.1108/LODJ-09-2023-0490   Brock, J. K.-U., & von Wangenheim, F. (2019). Demystifying AI: What Digital Transformation Leaders Can Teach You about Realistic Artificial Intelligence. California Management Review, 61(4), 110–134. https://doi.org/10.1177/1536504219865226   Chin, J. H., Haring, K. S., & Kim, P. (2023). Understanding the neural mechanisms of empathy toward robots to shape future applications. Frontiers in Neurorobotics, 17, 1145989. https://doi.org/10.3389/fnbot.2023.1145989   Ezer, N., Bruni, S., Cai, Y., Hepenstal, S. J., Miller, C. A., & Schmorrow, D. D. (2019). Trust Engineering for Human-AI Teams. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 63(1), 322–326. https://doi.org/10.1177/1071181319631264   Flathmann, C., Schelble, B. G., Rosopa, P. J., McNeese, N. J., Mallick, R., & Madathil, K. C. (2023). Examining the impact of varying levels of AI teammate influence on human-AI teams. International Journal of Human-Computer Studies, 177, 103061. https://doi.org/10.1016/j.ijhcs.2023.103061   Fuchs, A., Passarella, A., & Conti, M. (2024). Optimizing Delegation in Collaborative Human-AI Hybrid Teams. ACM Transactions on Autonomous and Adaptive Systems. https://doi.org/10.1145/3687130   Guznov, S., Lyons, J., Pfahler, M., Heironimus, A., Woolley, M., Friedman, J., & Neimeier, A. (2020). Robot Transparency and Team Orientation Effects on Human-Robot Teaming. International Journal of Human-Computer Interaction, 36(7), 650–660. https://doi.org/10.1080/10447318.2019.1676519   Hagemann, V., Rieth, M., Suresh, A., & Kirchner, F.
(2023). Human-AI teams—Challenges for a team-centered AI at work. Frontiers in Artificial Intelligence, 6, 1252897–1252897. https://doi.org/10.3389/frai.2023.1252897   Harris-Watson, A. M., Larson, L. E., Lauharatanahirun, N., DeChurch, L. A., & Contractor, N. S. (2023). Social perception in Human-AI teams: Warmth and competence predict receptivity to AI teammates. Computers in Human Behavior, 145, 107765-. https://doi.org/10.1016/j.chb.2023.107765   Hauptman, A. I., Schelble, B. G., Duan, W., Flathmann, C., & McNeese, N. J. (2024). Understanding the influence of AI autonomy on AI explainability levels in human-AI teams using a mixed methods approach. Cognition, Technology & Work, 26(3), 435–455. https://doi.org/10.1007/s10111-024-00765-7   Hauptman, A. I., Schelble, B. G., McNeese, N. J., & Madathil, K. C. (2023). Adapt and overcome: Perceptions of adaptive autonomous agents for human-AI teaming. Computers in Human Behavior, 138, 107451-. https://doi.org/10.1016/j.chb.2022.107451   Li, M., Kwon, M., & Sadigh, D. (2021). Influencing leading and following in human–robot teams. Autonomous Robots, 45(7), 959–978. https://doi.org/10.1007/s10514-021-10016-7   Ma, L. M., Ijtsma, M., Feigh, K. M., & Pritchett, A. R. (2022). Metrics for Human-Robot Team Design: A Teamwork Perspective on Evaluation of Human-Robot Teams. ACM Transactions on Human-Robot Interaction, 11(3), 1–36. https://doi.org/10.1145/3522581   Naikar, N., Brady, A., Moy, G., & Kwok, H.-W. (2023). Designing human-AI systems for complex settings: ideas from distributed, joint, and self-organising perspectives of sociotechnical systems and cognitive work analysis. Ergonomics, 66(11), 1669–1694. https://doi.org/10.1080/00140139.2023.2281898   Traeger, M. L., Sebo, S. S., Jung, M., Scassellati, B., & Christakis, N. A. (2020). Vulnerable robots positively shape human conversational dynamics in a human–robot team. Proceedings of the National Academy of Sciences, 117(12), 6370–6375. https://doi.org/10.1073/pnas.1910402117   You, S., & Robert, L. P. (2022). Team robot identification theory (TRIT): robot attractiveness and team identification on performance and viability in human–robot teams. The Journal of Supercomputing, 78(18), 19684–19706. https://doi.org/10.1007/s11227-022-04645-7

Marni on the Move
380. World Renowned DJ, Mei Kwok, Co-Founder Dune Suncare, Surfer and NYC Marathon Runner|NYC Marathon Series

Marni on the Move

Play Episode Listen Later Nov 7, 2024 36:28


Today on the podcast I'm syncing up with returning guest Mei Kwok, co-founder of Dune Suncare, world-renowned DJ, and surfer, as she gets ready to fly from LA to NY to run the NYC Marathon, her first marathon. Of course, I asked all about her playlist, what she was looking forward to about her first marathon and NYC, and the inspiration behind the fast-growing, super successful company she co-founded, Dune Suncare. We chat about her race-day mantra, strategy, shoes, and recovery plan. Mei and I chatted on the podcast back in 2018 on episode 45, listen here. CONNECT Mei Kwok on Instagram and Spotify Dune Suncare on Instagram Marni On The Move Instagram, TikTok, LinkedIn, or YouTube Marni Salup on Instagram and Spotify SUBSCRIBE TO OUR NEWSLETTER Sign up for our weekly newsletter, Do What Moves You, for Marni on the Move updates, exclusive offers, invites to events, and exciting news! SUPPORT THE PODCAST Leave us a review on Apple. It's easy: scroll through the episode list on your podcast app, click on five stars, click on leave a review, and share what you love about the conversations you're listening to. Tell your friends what you love on social. Screenshot or share directly from our stories the episode you're listening to, and tag us and the guests.

Sznurowadła myśli
A generation of worriers and dreamers. What are young adults facing?

Sznurowadła myśli

Play Episode Listen Later Nov 4, 2024 49:21


What are today's young adults afraid of? How does the lack of spontaneous play affect the mental health of children and teenagers? Where does the sense of emptiness and lack of direction come from? And what do young people care about most today? This is not an episode that diagnoses Generation Z or exhausts the topic with generic, beyond-debate conclusions. Some young people will recognize themselves in it; others will say, "That's not about me." And that's perfectly fine! The aim of this episode is simple: to spark your curiosity about what is inspiring and worth emulating among many young adults, and what may need more care, often even change. I talk about anxieties, attention deficits, and the lack of spontaneous, unstructured play, through which teenagers develop important social skills. There is also a discussion of a stable and accurate self-image, and of the lack of direction and purpose that stems from deep disappointment with the world as it is. Of course, I also marvel at how many young people are engaging with the climate crisis and working toward a creative life. I invite you to listen, with an open mind!
Books and research:
Jonathan Haidt, "The Anxious Generation"
Tomasz Sobierajski, Magdalena Kuszewska, "Pokolenia"
Gray (2011). The decline of play and the rise of psychopathology in children and adolescents. American Journal of Play.
Gray (2018). Evolutionary functions of play: Practice, resilience, innovation, and cooperation. In P. Smith & J. Roopnarine (Eds.), The Cambridge handbook of play: Developmental and disciplinary perspectives.
Sandseter, E. B. H., Kleppe, R., & Ottesen Kennair, L. E. (2023). Risky play in children's emotion regulation, social functioning, and physical health: an evolutionary approach. International Journal of Play, 1–13.
Lee, R. L. T., Lane, S., Brown, G., Leung, C., Kwok, S. W. H., & Chan, S. W. C. (2020). Systematic review of the impact of unstructured play interventions to improve young children's physical, social, and emotional wellbeing. Nursing & Health Sciences, 22(2), 184–196.
Family, children, settling down: contemporary relationships and generational change | Tomasz Sobierajski [sponsored collaboration] with Oatly. The Oatly report I mention in the episode: https://www.oatly.com/testsmaku
Contents:
00:00 Intro and episode partner
05:28 Generation Z: what is it like?
08:44 The anxious generation: anxiety, attention deficits, social comparison, a stable self-image
14:23 Jonathan Haidt's rules: fewer phones, more free and unstructured play
24:57 What do young people fear in relationships?
35:03 The dreamer generation: engagement, values, environmental awareness, and a creative life
38:04 The waking-up generation
45:51 Outro
COLLABORATION kama.wojtkiewicz@gmail.com INSTAGRAM https://www.instagram.com/sznurowadla.mysli/ PATRONITE https://patronite.pl/sznurowadla-mysli SOUND PRODUCTION Piotr Szonert / El Studio de Esperanto

The Path Less Chosen Pod
Intelligence is overrated: Phil Kwok, Founder & CEO of EasyA

The Path Less Chosen Pod

Play Episode Listen Later Oct 29, 2024 36:25


Meet today's incredible guest, Phil Kwok, the founder & CEO of EasyA, which partners with the likes of Yale and Harvard to educate the world on blockchain and Web3.

FranceFineArt

"L'Or des Ming": Splendors and beauties of imperial China (14th–17th century), at the Musée national des arts asiatiques – Guimet, Paris, from 18 September 2024 to 13 January 2025. Interview with Hélène Gascuel, curator of the Chinese furniture and textile collections at the musée Guimet and co-curator of the exhibition, by Anne-Frédérique Fer, in Paris, 21 October 2024, duration 20'33. © FranceFineArt. https://francefineart.com/2024/10/29/3569_l-or-des-ming_musee-national-des-arts-asiatiques-guimet/
Press release. Curators: Arnaud Bertrand, curator of the China and Korea collections, musée Guimet; Hélène Gascuel, curator of the Chinese furniture and textile collections, musée Guimet. This exhibition is organized by the musée Guimet and the Qujiang Museum of Fine Arts (Xi'an, Shaanxi, China) as part of the Franco-Chinese Year of Cultural Tourism and the celebration of the 60th anniversary of diplomatic relations between France and China. The works presented in the exhibition belong to the exceptional collection of Mr. Kwok. This autumn, the musée Guimet invites you into the splendor of the Ming imperial court (1368–1644) to discover the art, as codified as it is refined, of feminine adornment. This unprecedented exhibition reveals the luxury and delicacy of some of the finest creations in Chinese goldwork. Its exuberant aesthetic, at once singular and baroque, was found in the Forbidden City as well as in the richest palaces of the wealthy elite. Thanks to loans from the Qujiang Museum of Fine Arts (Xi'an, China) and its exceptional collection of ornaments and vessels, the musée Guimet offers a dazzling testament to the splendor of traditional goldsmithing and the art of jewelry during a period now regarded as one of the golden ages of Chinese civilization. [...] Hosted by Acast. Visit acast.com/privacy for more information.

Openwork: Inside the Watch Industry
Watch Production in China – Wesley Kwok (Nodus)

Openwork: Inside the Watch Industry

Play Episode Listen Later Oct 14, 2024 50:10


On this episode, we take a look at a topic that we've been curious about for a long time. We have discussed, both with members of the industry and amongst ourselves, the realities of production here in the United States, Japan, Switzerland and to a lesser degree Germany. But to date, we haven't discussed one of the largest centers of watch manufacturing in the world: China. From full watches to components of every imaginable variety, China is a formidable force in the world of watchmaking and watch collecting. To help us learn more about the Chinese production ecosystem and why it matters, we're proud to be joined by our friend Wes Kwok, co-founder of Nodus Watches. Hosted by Asher Rapkin and Gabe Reilly, co-founders of Collective Horology, Openwork goes inside the watch industry. You can find us online at collectivehorology.com. To get in touch with suggestions, feedback or questions, email podcast@collectivehorology.com.  

Reci Sydney's Podcast
(EN) God is in control _ Rev Edwin Kwok

Reci Sydney's Podcast

Play Episode Listen Later Oct 13, 2024 46:25


AI Live
Protocol for Patient Photos with Enoch Kwok ft. Jason Johnson from Simple Studios | AI Live

AI Live

Play Episode Listen Later Sep 6, 2024 65:50


Explore in-depth protocols for capturing and handling patient photos in partnership with Jason Johnson from Simple Studios. Gain insights into the methods behind Jason's successful techniques, which have set the industry standard for renowned brands such as Allergan, Kybella, Evolus, Jeaveau, Revance, and more. Delve into a step-by-step understanding of the critical factors that contribute to the success of your practice's before and after photos, encompassing photography settings, backgrounds, lighting, positioning, and more. Acquire practical expertise in utilizing cutting-edge technologies to enhance patient photo documentation, privacy, and informed consent processes. Original Air Date: September 3rd 2024 - 5:00pm - 6:00 pm

Restart Recharge Podcast
414 Supporting New Teachers and Classroom Management

Restart Recharge Podcast

Play Episode Listen Later Sep 3, 2024 37:52 Transcription Available


Supporting new teachers during their first five years is crucial for their long-term success and retention in the profession. In this episode, Matthaeus Huelse and Tyler Irwin are joined by Dr. Andrew Kwok, an associate professor from Texas A&M University, to discuss the unique challenges new teachers face and how instructional coaches can make a significant impact. Dr. Kwok shares his extensive research on teacher preparation and classroom management, offering invaluable insights into how relational approaches and strategic support can transform the early teaching experience. Listeners will gain practical tips on building strong student relationships, managing classroom dynamics, and tailoring coaching support to meet the needs of novice educators. Whether you're a coach, administrator, or teacher, this episode provides actionable advice to help new teachers thrive.
Coaching Network: We empower coaches with a holistic approach to implement practical skills and strategies that create a wave of lasting change with the educators in their schools. We work to improve learning by being right there with you, on the ground, and in schools every day.
Edge•U Badges: Edge•U is an anytime, anywhere professional learning platform made for teachers by teachers!

IMTalk
IMTalk Episode 941 – Kwok Yuen Teh

IMTalk

Play Episode Listen Later Aug 27, 2024 98:40


This week we interview Kwok Yuen Teh, who after five years in the sport achieved his sub-nine-hour goal. We also have News, Discussion of the Week, Website of the Week, and Q&As.

STANDARD H Podcast
Ep. 134 - Wes Kwok (Nodus Watches & Intersect Show)

STANDARD H Podcast

Play Episode Listen Later Jul 23, 2024 76:26


In the watch world, you have huge brands, big brands, independent brands, small brands, and even micro brands. But today's episode with Wes Kwok of Nodus Watches highlights some big moves. From starting with nothing, to reinvesting in the company with his partner, Cullen, to eventually producing parts for other brands, the journey has been no small feat. Further, Wes and Nodus have led the charge for a new kind of watch show with Intersect, where the focus is to keep the crowds to a minimum while having a large, supportive community impact. The conversation begins with Wes's recent big trip, which you'll hear all about. Links: STANDARD H https://standard-h.com/ @standardh_ Wes Kwok / Nodus Watches / Intersect @noduswatches @intersectwatchshow @wes_kwok

HBR IdeaCast
Why We Should Pay More Attention to Departing CEOs

HBR IdeaCast

Play Episode Listen Later Jul 9, 2024 28:50


When news breaks of a CEO succession, much of the attention is given to the new leader and how they will change the company. But new research shows that the leave-taking process of the outgoing chief executive is often mishandled, with negative impacts on succession and the organization. Rebecca Slan Jerusalim, an executive director at Russell Reynolds Associates, and Navio Kwok, a leadership advisor at RRA, say that boards are often surprised when a CEO gives notice, and they often make that person feel excluded during the handoff process. The researchers share stories from the front lines about CEO psychology, best practices for outgoing leaders and their boards, and broader lessons for effective transitions. Jerusalim and Kwok wrote the HBR article "The Vital Role of the Outgoing CEO."

Crypto Hipster Podcast
Charting New Paths for Humanity to Preserve the Potential of People Amidst an Ever-Increasing Artificial Intelligence Revolution, with Terence Kwok @ Humanity Protocol

Crypto Hipster Podcast

Play Episode Listen Later Jul 1, 2024 32:55


Terence Kwok, Founder and CEO of Human Institute. Terence Kwok is the founder and CEO of Humanity Protocol, a leading organization dedicated to amplifying human potential amidst the transformative impact of artificial intelligence. He is a visionary technology entrepreneur from Hong Kong and the CEO and founder of one of Asia's first unicorns. Kwok's journey has been marked by strategic partnerships with a broad spectrum of globally influential investors and partners, setting disruptive new industry standards. With expertise in blockchain, Web3, and technology integration, Kwok's mission goes beyond technological advancement. He aims to use technology to bring people together, ensuring its benefits are widely accessible and impactful. His contributions are not only innovative but also pivotal in shaping the future of disruptive technologies. Recognized for his insightful vision, Kwok advocates for change that challenges conventions. Kwok's entrepreneurial drive was nurtured at the University of Chicago, laying the foundation for his future endeavors. His journey from academia to the forefront of tech entrepreneurship highlights his influential role in pushing the boundaries of technology. --- Support this podcast: https://podcasters.spotify.com/pod/show/crypto-hipster-podcast/support

Tales From the Trail by MatchPlay

In this episode, Justin Chezem, head coach of Christopher Newport University Men's Soccer, and I welcome Emily Kwok. Emily is highly qualified to speak to the pathways and habits of high performers. She was a world champion in Brazilian Jiu-Jitsu and now works as a peak performance consultant with Josh Waitzkin. Their clients include professional sports teams, tech innovators, impact-oriented finance groups, and enterprises that are redefining their respective industries. There's a lot to learn from Emily and this discussion! Thank you to Adam Benayoun for the introduction!

Kunststof
Jean Kwok, writer

Kunststof

Play Episode Listen Later Apr 4, 2024 48:45


This week, the new novel 'Zij die achterbleef' by writer Jean Kwok is published. In the book, the lives of two women intersect: the American editor Rebecca Whitney, who adopted a Chinese daughter with her husband, and the Chinese Jasmine Yang, who left her life in China behind to search for the daughter who was taken from her. Kwok is known for books such as 'Girl in Translation' and 'Mambo in Chinatown'. Presented by Willemijn Veenhoven.

Emergency Medical Minute
Podcast 896: Cancer-Related Emergencies

Emergency Medical Minute

Play Episode Listen Later Mar 25, 2024 2:30


Contributor: Travis Barlock, MD
Educational Pearls: Cancer-related emergencies can be sorted into a few buckets:
Infection: Cancer itself and the treatments (chemotherapy/radiation) can be immunosuppressive. Look out for conditions such as sepsis and neutropenic fever.
Obstruction: Cancer causes a hypercoagulable state. Look out for blood clots, which can cause emergencies such as a pulmonary embolism, stroke, superior vena cava (SVC) syndrome, and cardiac tamponade.
Metabolic: Cancer can affect the metabolic system in a variety of ways. For example, certain cancers like bone cancers can stimulate the bones to release large amounts of calcium, leading to hypercalcemia. Tumor lysis syndrome is another consideration, in which, either spontaneously or due to treatment, tumor cells release large amounts of electrolytes into the bloodstream, causing hyperuricemia, hyperkalemia, hyperphosphatemia, and hypocalcemia.
Medication side effect: Immunomodulators can have strange side effects. A common one to know is Keytruda (pembrolizumab), which can cause inflammation in any organ. So if you have a cancer patient on immunomodulators with any inflammatory changes (cystitis, colitis, pneumonitis, etc.), talk to oncology about whether steroids are indicated. Chemotherapy can cause tumor lysis syndrome (see above), and multiple chemotherapeutics are known to cause heart failure (doxorubicin, trastuzumab), kidney failure (cisplatin), and pulmonary toxicity (bleomycin).
References
Campello, E., Ilich, A., Simioni, P., & Key, N. S. (2019). The relationship between pancreatic cancer and hypercoagulability: a comprehensive review on epidemiological and biological issues. British Journal of Cancer, 121(5), 359–371. https://doi.org/10.1038/s41416-019-0510-x
Gyamfi, J., Kim, J., & Choi, J. (2022). Cancer as a Metabolic Disorder. International Journal of Molecular Sciences, 23(3), 1155. https://doi.org/10.3390/ijms23031155
Kwok, G., Yau, T. C., Chiu, J. W., Tse, E., & Kwong, Y. L. (2016). Pembrolizumab (Keytruda). Human Vaccines & Immunotherapeutics, 12(11), 2777–2789. https://doi.org/10.1080/21645515.2016.1199310
Wang, S. J., Dougan, S. K., & Dougan, M. (2023). Immune mechanisms of toxicity from checkpoint inhibitors. Trends in Cancer, 9(7), 543–553. https://doi.org/10.1016/j.trecan.2023.04.002
Zimmer, A. J., & Freifeld, A. G. (2019). Optimal Management of Neutropenic Fever in Patients With Cancer. Journal of Oncology Practice, 15(1), 19–24. https://doi.org/10.1200/JOP.18.00269
Summarized by Jeffrey Olson, MS2 | Edited by Meg Joyce & Jorge Chalit, OMSII

BJJ Mental Models
Ep. 276: Polarity Mapping, feat. Emily Kwok

BJJ Mental Models

Play Episode Listen Later Mar 18, 2024 59:26


This week we're joined again by Emily Kwok! Emily is a peak performance consultant and multi-time BJJ black belt world champion representing Marcelo Garcia. In this episode, Emily introduces polarity mapping: a powerful tool for understanding nuance and breaking through either/or thinking. We discuss polarities in Jiu-Jitsu, such as: tension/slack, fast/slow, retraction/extension, holding on/letting go, and concepts/techniques.
Emily's website: https://www.emilykwok.com/
Emily's Instagram: https://www.instagram.com/emilykwokbjj
Resources discussed in this episode:
BJJMM Ep. 121: Kegan's Theory of Adult Development, feat. David Zeitler https://bjj.plus/121
Polarity Management, by Barry Johnson https://amzn.to/3PpPMsp
Mental models discussed in this episode:
Probabilistic Thinking https://bjjmentalmodels.com/probabilistic-thinking/
Staying Loose https://bjjmentalmodels.com/staying-loose/
Limb Coiling https://bjjmentalmodels.com/limb-coiling/
Shuhari https://bjjmentalmodels.com/shuhari/
Concepts Over Techniques https://bjjmentalmodels.com/concepts-over-techniques/
Theory of Alignment https://bjjmentalmodels.com/theory-of-alignment/
Force Compression https://bjjmentalmodels.com/force-compression/
Don't forget to check out BJJ Mental Models Premium! If you love the podcast, you'll definitely love our premium membership offerings. The podcast is truly just the tip of the iceberg – the next steps on your journey are joining our community, downloading our strategy courseware, and working with us to optimize your game. We do all this through memberships that come in at a fraction of the cost of a single private.
Sign up here for a free trial: https://bjjmentalmodels.com/
Need more BJJ Mental Models? Get tips, tricks, and breakthrough insights from our newsletter: https://bjjmentalmodels.com/newsletter/
Get nitty-gritty details on our mental models from the full database: https://bjjmentalmodels.com/database/
Follow us on social: https://facebook.com/bjjmentalmodels/ https://instagram.com/bjjmentalmodels/
Music by Enterprize: https://enterprize.bandcamp.com/

The Hong Kong History Podcast
How names can tell us a story, Part 1: Kwok Acheong

The Hong Kong History Podcast

Play Episode Listen Later Mar 18, 2024 46:35


Almost wherever you are there will be streets named after town worthies, or national eminences, or significant entities and events. Sometimes, particularly in larger towns, the names can reveal additional historical detail. What the main trades were and where they concentrated, for example. In Hong Kong over one hundred street names reveal details of Hong Kong's maritime story, particularly in its early decades. One of them, long lost – or perhaps mislaid – I have recently rediscovered. The streets – there were two – were named after a major early Chinese shipowner, mover and shaker. Kwok Acheong may not now be much celebrated, but he was one of the founders of the Tung Wah Hospital and at one time Hong Kong's biggest taxpayer. 

Spooked!
Ep. 425 – Toronto Comicon 2024 with Patrick Kwok-Choon

Spooked!

Play Episode Listen Later Mar 17, 2024 42:53


The Spooky Comicon 2024! Today on Spooked! we're live from Toronto Comicon with Star Trek: Discovery's own Patrick Kwok-Choon! It's all about hauntings, rest stops, and haunted candy! So grab your passes, get in line, and get ready to get Spooked! Brought to you by: The Sonar Network https://thesonarnetwork.com/

Wake Up and Win with DeVon Pouncey
Episode 243: "The Voice of the Remix" Featuring Gareth Kwok

Wake Up and Win with DeVon Pouncey

Play Episode Listen Later Feb 29, 2024 86:24


On this episode we are joined by the Rip City Remix PxP voice Gareth Kwok to discuss his journey leading up to working for the Remix (19:48), his experience working for the Remix and more!

Making Time
Making Time EPISODE 9 | Wesley Kwok (Nodus Watches)

Making Time

Play Episode Listen Later Feb 16, 2024 51:40


Welcome to MakingTime, the podcast that takes you behind the scenes of the watch industry. In this episode, we have Wesley from Nodus Watches, who shares his insights on the inner workings of the business. Follow Nodus Watches: ▶︎ Instagram ▶︎ Web
Chapters:
00:00:00 - Wesley Kwok: From Music to Watches
00:05:52 - The Journey of Watch Collecting
00:11:30 - Design Iterations in Watchmaking
00:17:13 - Creative Freedom and Design Language
00:23:00 - Building a Community and Telling Stories
00:28:53 - Expanding the Industry's Reach and Collaboration
00:34:23 - The Evolution of Micro Brands
00:40:07 - Defining a "Micro Brand"
00:45:53 - Doing Things No One Else Has Done
Tune in to MakingTime to gain a deeper understanding of the watch industry and to hear from the experts themselves. Don't miss this insightful episode that will leave you with a newfound appreciation for the craftsmanship and artistry that goes into creating every timepiece. Subscribe to our YouTube channel to stay updated on all our episodes. Thank you for joining us on this journey into the heart of the watch industry. Stay tuned for more behind-the-scenes insights on MakingTime. ▶︎ Watch the podcast on YouTube. Follow Z.A Strap Company for more: ▶︎ Website ▶︎ Instagram #makingtime #zulualphastraps #watchpodcast Recorded and Produced by Liverpool Podcast Studios ▶︎ Web ▶︎ Instagram ▶︎ LinkedIn

The Joe Beaver Show
The Joe Beaver Show 2-1 D1Baseball's Aaron Fitt, OSU Gymnastics Associate HC Michael Chaplin, Rip City Remix (Trailblazers G-League Affiliate) PxP Broadcaster Gareth Kwok

The Joe Beaver Show

Play Episode Listen Later Feb 1, 2024 98:28


Over A Pint Marketing Podcast
Connie Kwok, Digital Strategist: For J.P. Morgan, The Hershey Co. AMEX, Diageo Brands, and Blockstream

Over A Pint Marketing Podcast

Play Episode Listen Later Jan 30, 2024 55:07


#106 Today we talk to Connie Kwok. Connie and Pat met over 20 years ago when she was just starting out. And once she left the agency, her career took off like a rocket. Connie has worked on the agency and client side for some of the biggest brands on the planet and in so many different verticals – it's sick! And because that wasn't cool enough, she left the "traditional brand world" and jumped into the world of Bitcoin and blockchain. Here's what to listen for in this episode: ➤ Connie's playbook when she comes into a new company, what it's like working with these big "heritage" brands, and why the brand story is important in creating a digital strategy. ➤ Why a brand story is so important when it comes to developing a digital strategy. ➤ How she goes about building her teams – and her one key question when she is meeting new team members. Connie was awesome – you'll love hearing her story. If you liked this one, take a listen to last week's episode with Jay Baer. ✅ Get in touch with Kurt at: https://www.linkedin.com/in/kurtlingel/ ✅ Connect with Pat at: pmcgovern@ascedia.com

Morbid
Episode 533: The Mysterious Death of Charles Morgan

Morbid

Play Episode Listen Later Jan 29, 2024 69:11


In March 1977, Arizona businessman Charles Morgan went missing from his home in Tucson, only to turn up three days later in the middle of the night, shoeless, traumatized, and with broken plastic handcuffs on his wrists and ankles. Unable to speak, Charles wrote that he had been drugged by an unnamed individual and kidnapped, but he refused to let his wife call the police or otherwise report the assault. Three months later, Charles Morgan's body was discovered in the desert with a gunshot wound in the back of his head, one of his teeth wrapped in a handkerchief, and a two-dollar bill pinned to his underwear.From the outside, Charles Morgan appeared to live a very normal and decidedly unexciting life. Yet when investigators began digging into his background to find out who would have wanted him dead, they discovered a complicated and bizarre story of supposed government agents, mobsters, and a mystery that one would have expected from a Hollywood screenplay, not the life of a middle-aged Arizona escrow agent. The increasingly bizarre details of Morgan's life and death comprise a fascinating mystery that remains unsolved to this day and endures as one of Arizona's most baffling cold cases.Thank you to David White, of the Bring Me the Axe podcast, for research assistanceReferencesBassett, Edward, and David Dykes. 1977. "Mystery death a suicide?" Tucson Citizen, June 22: 1.Bassett, Edward, and Richard Wood. 1977. "Slain businessman's bank dealings probed." Tucson Citizen, June 27: 3.Flanagan, Ray. n.d. "Did 'hit-man."—. 1990. "Did 'hit-man' with ties to region figure in Arizona death case?" Tribune, September 25: 3.Heltsley, Ernie, and John Rawlinson. 1979. "1977 shooting ended Tucsonan's two lives." Arizona Daily Star, February 4: 1.Jordan, Tracy. 1990. "City residents asked to drop a dime on hit man." Times Leader, October 22: 3.Kwok, Abraham. 1992. "Phoenix death a mistaken 'hit'?" Arizona Republic, May 6: 10.Matas, Kimberly. 2010. "Strange evidence found in '77 on, near man's body." Arizona Daily Star, March 31: A08.1990. Unsolved Mysteries. Directed by John McLaughlin. Performed by John McLaughlin.Salkowski, Joe, and Enric Volante. 2002. "Mob faded locally long before key figure died." Arizona Daily Star, May 19: 1.Svejcara, Bob. 1977. "Sheriff finds no foul play in Morgan death." Arizona Daily Star, August 11: 13.Svejcara, Bob, and Ernie Heltsley. 1977. "Slain businessman seen during 'absence'." Arizona Daily Star, June 23: 1.Tucson Citizen. 1977. "Sheriff's probe says Morgan was a sucide." Tucson Citizen, August 11: 4.Wood, Richard. 1977. "Slain Tucson executive: solid citizen... mystery man." Tucson Citizen, June 21: 2.—. 1977. "Woman says Morgan hid, trying to buy off his life." Tucson Citizen, June 21: 1.See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Madang
Madang Podcast: Kwok Pui Lan, Ep. 35

Madang

Play Episode Listen Later Dec 20, 2023 59:58


Welcome to Madang Podcast. Madang is the outdoor living room of the world. Here, we invite you to sit and tune into unreserved, remarkable conversations with renowned authors, leaders, public figures and scholars on religion, culture and everything in-between. This has been a dream of mine for many years and now it is a reality. Please join me at Madang Podcast, hosted by the Christian Century. This is the 35th episode of Madang, where I converse with Kwok Pui Lan on her book, The Anglican Tradition from a Postcolonial Perspective. Kwok is Dean's Professor of Systematic Theology & Special Advisor to the Dean for Strategic Changes at Candler School of Theology. Kwok is the former William F. Cole Professor of Christian Theology and Spirituality at Episcopal Divinity School. Kwok's research focuses on Asian feminist theology and postcolonial theology. She has written or edited 23 books in English and Chinese, including Postcolonial Politics and Theology and The Hong Kong Protests and Political. Her current research focuses on the practice of postcolonial theology. On this episode, Kwok talks with me about The Anglican Tradition from a Postcolonial Perspective, the Anglican church, forgiveness, women's ordination, Desmond Tutu and so much more. You can also listen to the podcast on Spotify and Apple Podcasts. I am grateful to Homebrewed Christianity, Candler School of Theology and FACE for their sponsorship of this episode. Please check out their websites for their work, events and to donate. Please reach out to me if you would like to sponsor the next episode of Madang podcast. Or simply support me here. --- Support this podcast: https://podcasters.spotify.com/pod/show/grace-ji-sun-kim/support

The Credit Edge by Bloomberg Intelligence
Private Credit 2024 Outlook; Asean Bank Resilience

The Credit Edge by Bloomberg Intelligence

Play Episode Listen Later Dec 15, 2023 33:18 Transcription Available


Private credit will hog the limelight in 2024, with ever-larger deals and continued expansion, even as high rates and a slowing economy add risks. To discuss the outlook for next year, Paula Seligson and Lisa Lee — senior reporters in Bloomberg's global private credit news team — join senior editor James Crombie in the latest episode of the Credit Edge podcast. Private debt will likely attract more investors — and the attention of regulators seeking transparency. Also in this episode, Bloomberg Intelligence credit analyst Rena Kwok weighs the resilience of Asean banks amid a Chinese economic slowdown. She identifies relative value in Bangkok Bank bonds and sees risks across the board from elevated interest rates. Private credit activity has been muted in the region, but it's something to watch for next year, Kwok says.See omnystudio.com/listener for privacy information.

Kenny's G League
A Chat with Gareth Kwok, Play-by-Play Announcer of the Remix

Kenny's G League

Play Episode Listen Later Dec 14, 2023 28:44


Thank you to Gareth for joining the show! Find him @HeyGKwok on Twitter/X. Browse the offerings from our sponsor at basketghoul.com

Taiwanese Diaspora 台灣人 Podcast
#66: Moving a startup from Silicon Valley to Taiwan | Dr. Amy Kwok, CEO of Penguin Smart

Taiwanese Diaspora 台灣人 Podcast

Play Episode Listen Later Nov 24, 2023 40:55


Interview with Amy Kwok, CEO of Penguin Smart, on taking an MIT Development Lab idea to the MIT $100K Entrepreneurship Competition, and forming a startup in Silicon Valley before moving it to Taiwan. Penguin Smart offers digital solutions for parent-centered speech intervention. Crowdfunding support (ends Nov 27) https://wabay.tw/projects/slower-flying-angels-support?locale=zh-TW English website: https://www.mypenguinsmart.com/ Traditional Chinese website: https://tcn.mypenguinsmart.com/ For a shortened Mandarin version of the interview, check out Episode 67. --- Support this podcast: https://podcasters.spotify.com/pod/show/twdiaspora/support

Taiwanese Diaspora 台灣人 Podcast
#67: On moving a startup from the US to Taiwan | Dr. Amy Kwok, CEO of PenguinSmart (啟兒寶)

Taiwanese Diaspora 台灣人 Podcast

Play Episode Listen Later Nov 24, 2023 23:46


Dr. Amy Kwok is the CEO of PenguinSmart (啟兒寶). Today she walks through the process of founding a startup in the US and the experience of moving the company to Taiwan. PenguinSmart started in Silicon Valley and was co-founded by MIT and Harvard alumni to advance smart speech-rehabilitation support services for children. Our service is simple: by combining technology with healthcare, we expand where early intervention can happen, empower parents, and cheer on children. We now invite you to become an important participant in the rehabilitation journey of children in rural areas and in early intervention. Using the technology and remote early-intervention courses developed by PenguinSmart's professional team, we support families of children with developmental delays and provide individualized help. Your sponsorship will support disadvantaged families of children with delays so they are no longer marginalized, and through self-directed training at home will create more developmental opportunities for these children as they stride toward a hopeful future. Help support the crowdfunding goal (ends Nov 27): https://wabay.tw/projects/slower-flying-angels-support?locale=zh-TW English website: https://www.mypenguinsmart.com/ Chinese website: https://tcn.mypenguinsmart.com/ For the English version of the interview, check out Episode 66. --- Support this podcast: https://podcasters.spotify.com/pod/show/twdiaspora/support

Digital Social Hour
Gav Kwok on Making $100M on Amazon, Traveling to 100 Countries & Pro Tennis Digital Social Hour #134

Digital Social Hour

Play Episode Listen Later Oct 19, 2023 36:53


On today's episode of the Digital Social Hour, we sit down with Gav Kwok and talk about what level of wealth is comfortable, backpacking through Europe & growing up in Australia. BUSINESS INQUIRIES/SPONSORS: Jenna@DigitalSocialHour.com APPLY TO BE ON THE POD: https://forms.gle/qXvENTeurx7Xn8Ci9 SPONSORS: Opus Pro: https://www.opus.pro/?via=DSH HelloFresh: https://www.hellofresh.com/50dsh AG1: https://www.drinkAG1.com/DSH Hostage Tape: https://hostagetape.com/DSH LISTEN ON: Apple Podcasts: https://podcasts.apple.com/us/podcast/digital-social-hour/id1676846015 Spotify: https://open.spotify.com/show/5Jn7LXarRlI8Hc0GtTn759 Sean Kelly Instagram: https://www.instagram.com/seanmikekelly/ Learn more about your ad choices. Visit megaphone.fm/adchoices

Digital Social Hour
Gav Kwok on Making $100M on Amazon, Traveling to 100 Countries & Pro Tennis Digital Social Hour #134

Digital Social Hour

Play Episode Listen Later Oct 19, 2023 39:37


On today's episode of the Digital Social Hour, we sit down with Gav Kwok and talk about what level of wealth is comfortable, backpacking through Europe & growing up in Australia. BUSINESS INQUIRIES/SPONSORS: Jenna@DigitalSocialHour.com APPLY TO BE ON THE POD: https://forms.gle/qXvENTeurx7Xn8Ci9 SPONSORS: Opus Pro: https://www.opus.pro/?via=DSH HelloFresh: https://www.hellofresh.com/50dsh AG1: https://www.drinkAG1.com/DSH Hostage Tape: https://hostagetape.com/DSH LISTEN ON: Apple Podcasts: https://podcasts.apple.com/us/podcast/digital-social-hour/id1676846015 Spotify: https://open.spotify.com/show/5Jn7LXarRlI8Hc0GtTn759 Sean Kelly Instagram: https://www.instagram.com/seanmikekelly/ Learn more about your ad choices. Visit megaphone.fm/adchoices

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Thanks to the over 17,000 people who have joined the first AI Engineer Summit! A full recap is coming. Last call to fill out the State of AI Engineering survey! See our Community page for upcoming meetups in SF, Paris and NYC. This episode had good interest on Twitter.
Fast.ai's "Practical Deep Learning" courses have been watched by over 6,000,000 people, and the fastai library has over 25,000 stars on GitHub. Jeremy Howard, one of the creators of fast.ai, is now one of the most prominent and respected voices in the machine learning industry; but that wasn't always the case.
Being non-consensus and right
In 2018, Jeremy and Sebastian Ruder published a paper on ULMFiT (Universal Language Model Fine-tuning), a 3-step transfer learning technique for NLP tasks. The paper demonstrated that pre-trained language models could be fine-tuned on a specific task with a relatively small amount of data to achieve state-of-the-art results. They trained a 24M-parameter model on WikiText-103, which beat most benchmarks. While the paper had great results, the methods behind it weren't taken seriously by the community:
"Everybody hated fine tuning. Everybody hated transfer learning. I literally did tours trying to get people to start doing transfer learning and nobody was interested, particularly after GPT showed such good results with zero shot and few shot learning […] which I was convinced was not the right direction, but who's going to listen to me, cause as you said, I don't have a PhD, not at a university… I don't have a big set of computers to fine tune huge transformer models."
Five years later, fine-tuning is at the center of most major discussion topics in AI (we covered some like fine tuning vs RAG and small models fine tuning), and we might have gotten here earlier if Jeremy had OpenAI-level access to compute and distribution. At heart, Jeremy has always been "GPU poor":
"I've always been somebody who does not want to build stuff on lots of big computers because most people don't have lots of big computers and I hate creating stuff that most people can't use."
This story is a good reminder of how some of the best ideas are hiding in plain sight; we recently covered RWKV and will continue to highlight the most interesting research that isn't being done in the large labs.
Replacing fine-tuning with continued pre-training
Even though fine-tuning is now mainstream, we still have a lot to learn. The issue of "catastrophic forgetting" and potential solutions have been brought up in many papers: at the fine-tuning stage, the model can forget tasks it previously knew how to solve in favor of new ones. The other issue is apparent memorization of the dataset even after a single epoch, which Jeremy covered in Can LLMs learn from a single example? but which we still don't have the answer to. Despite being the creator of ULMFiT, Jeremy still professes that there are a lot of open questions on fine-tuning:
"So I still don't know how to fine tune language models properly and I haven't found anybody who feels like they do."
He now advocates for "continued pre-training": maintaining a diversity of data throughout the training process rather than separate pre-training and fine-tuning stages. Mixing instructional data, exercises, code, and other modalities while gradually curating higher-quality data can avoid catastrophic forgetting and lead to more robust capabilities (something we covered in Datasets 101).
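To make the idea concrete, here is a toy sketch (not from the episode) of what a continued pre-training data schedule could look like: every data source stays in the mix for the whole run, and the sampling weights gradually shift toward the more curated, task-specific data instead of switching to a separate fine-tuning stage. The source names, weights, and linear schedule below are all illustrative assumptions.

# Toy continued pre-training schedule: keep all sources, shift the mix over time.
import random

sources = {
    "web_text":     ["<web doc 1>", "<web doc 2>"],              # broad general data
    "code":         ["<code file 1>", "<code file 2>"],
    "instructions": ["<instruction ex 1>", "<instruction ex 2>"],  # curated, task-specific
}

def mixing_weights(step, total_steps):
    """Linearly anneal from a web-heavy mix to an instruction-heavy mix,
    without ever dropping any source to zero."""
    t = step / total_steps
    return {
        "web_text":     0.7 * (1 - t) + 0.2 * t,
        "code":         0.2,
        "instructions": 0.1 * (1 - t) + 0.6 * t,
    }

def sample_batch(step, total_steps, batch_size=4):
    # Pick which source each example comes from, then draw an example from it.
    w = mixing_weights(step, total_steps)
    names = list(sources)
    picks = random.choices(names, weights=[w[n] for n in names], k=batch_size)
    return [random.choice(sources[name]) for name in picks]

print(mixing_weights(0, 1000))     # early in training: mostly general web text
print(mixing_weights(1000, 1000))  # late in training: mostly instructions, web text still present
print(sample_batch(step=500, total_steps=1000))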
"Even though I originally created three-step approach that everybody now does, my view is it's actually wrong and we shouldn't use it… the right way to do this is to fine-tune language models, is to actually throw away the idea of fine-tuning. There's no such thing. There's only continued pre-training. And pre-training is something where from the very start, you try to include all the kinds of data that you care about, all the kinds of problems that you care about, instructions, exercises, code, general purpose document completion, whatever. And then as you train, you gradually curate that, you know, you gradually make that higher and higher quality and more and more specific to the kinds of tasks you want it to do. But you never throw away any data… So yeah, that's now my view, is I think ULMFiT is the wrong approach. And that's why we're seeing a lot of these so-called alignment tax… I think it's actually because people are training them wrong."
An example of this phenomenon is CodeLlama, a LLaMA2 model fine-tuned on 500B tokens of code: while the model is much better at code, it's worse on generic tasks that LLaMA2 knew how to solve well before the fine-tuning. In the episode we also dive into all the places where open source model development and research is happening (academia vs Discords - tracked on our Communities list and on our survey), and how Jeremy recommends getting the most out of these diffuse, pseudonymous communities (similar to the Eleuther AI Mafia).
Show Notes
* Jeremy's Background
* FastMail
* Optimal Decisions
* Kaggle
* Enlitic
* fast.ai
* Rachel Thomas
* Practical Deep Learning
* fastai for PyTorch
* nbdev
* fastec2 (the underrated library we describe)
* Can LLMs learn from a single example?
* the Kaggle LLM Science Exam competition, which "challenges participants to answer difficult science-based questions written by a Large Language Model"
* Sebastian Ruder
* Alec Radford
* Sylvain Gugger
* Stephen Merity
* Chris Lattner
* Modular.ai / Mojo
* Jono Whittaker
* Zeiler and Fergus paper
* ULM Fit
* DAWNBench
* Phi-1
* Code Llama
* AlexNet
Timestamps
* [00:00:00] Intros and Jeremy's background
* [00:05:28] Creating ULM Fit - a breakthrough in NLP using transfer learning
* [00:06:32] The rise of GPT and the appeal of few-shot learning over fine-tuning
* [00:10:00] Starting Fast.ai to distribute AI capabilities beyond elite academics
* [00:14:30] How modern LMs like ChatGPT still follow the ULM Fit 3-step approach
* [00:17:23] Meeting with Chris Lattner on Swift for TensorFlow at Google
* [00:20:00] Continued pre-training as a fine-tuning alternative
* [00:22:16] Fast.ai and looking for impact vs profit maximization
* [00:26:39] Using Fast.ai to create an "army" of AI experts to improve their domains
* [00:29:32] Fast.ai's 3 focus areas - research, software, and courses
* [00:38:42] Fine-tuning memorization and training curve "clunks" before each epoch
* [00:46:47] Poor training and fine-tuning practices may be causing alignment failures
* [00:48:38] Academia vs Discords
* [00:53:41] Jeremy's high hopes for Chris Lattner's Mojo and its potential
* [01:05:00] Adding capabilities like SQL generation through quick fine-tuning
* [01:10:12] Rethinking Fast.ai courses for the AI-assisted coding era
* [01:14:53] Rapid model development has created major technical debt
* [01:17:08] Lightning Round
AI Summary (beta)
This is the first episode we're trying this on. Here's an overview of the main topics before you dive into the transcript.
* Jeremy's background and philosophies on AI
* Studied philosophy and cognitive science in college
* Focused on ethics and thinking about AI even 30 years ago
* Believes AI should be accessible to more people, not just elite academics/programmers
* Created fast.ai to make deep learning more accessible
* Development of transfer learning and ULMFit
* Idea of transfer learning critical for making deep learning accessible
* ULMFit pioneered transfer learning for NLP
* Proposed training general language models on large corpora then fine-tuning - this became standard practice
* Faced skepticism that this approach would work from NLP community
* Showed state-of-the-art results on text classification soon after trying it
* Current open questions around fine-tuning LLMs
* Models appear to memorize training data extremely quickly (after 1 epoch)
* This may hurt training dynamics and cause catastrophic forgetting
* Unclear how best to fine-tune models to incorporate new information/capabilities
* Need more research on model training dynamics and ideal data mixing
* Exciting new developments
* Mojo and new programming languages like Swift could enable faster model innovation
* Still lots of room for improvements in computer vision-like innovations in transformers
* Small models with fine-tuning may be surprisingly capable for many real-world tasks
* Prompting strategies enable models like GPT-3 to achieve new skills like playing chess at superhuman levels
* LLMs are like computer vision in 2013 - on the cusp of huge new breakthroughs in capabilities
* Access to AI research
* Many key convos happen in private Discord channels and forums
* Becoming part of these communities can provide great learning opportunities
* Being willing to do real work, not just talk about ideas, is key to gaining access
* The future of practical AI
* Coding becoming more accessible to non-programmers through AI assistance
* Pre-requisite programming experience for learning AI may no longer be needed
* Huge open questions remain about how to best train, fine-tune, and prompt LLMs
Transcript
Alessio: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO in residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI. [00:00:21]
Swyx: Hey, and today we have in the remote studio, Jeremy Howard all the way from Australia. Good morning. [00:00:27]
Jeremy: The remote studio, also known as my house. Good morning. Nice to see you. [00:00:32]
Swyx: Nice to see you too. I'm actually very used to seeing you in your mask as a message to people, but today we're mostly audio. But thank you for doing the very important public service of COVID awareness. It was a pleasure. [00:00:46]
Jeremy: It was all very annoying and frustrating and tedious, but somebody had to do it. [00:00:52]
Swyx: Somebody had to do it, especially somebody with your profile. I think it really drives home the message. So we tend to introduce people for them and then ask people to fill in the blanks on the personal side. Something I did not know about you was that you graduated with a BA in philosophy from the University of Melbourne. I assumed you had a PhD. [00:01:14]
Jeremy: No, I mean, I barely got through my BA because I was working 80 to 100 hour weeks at McKinsey and Company from 19 years old onwards. So I actually didn't attend any lectures in second and third year university.
[00:01:35]Swyx: Well, I guess you didn't need it or you're very sort of self-driven and self-motivated. [00:01:39]Jeremy: I took two weeks off before each exam period when I was working at McKinsey. And then, I mean, I can't believe I got away with this in hindsight, I would go to all my professors and say, oh, I was meant to be in your class this semester and I didn't quite turn up. Were there any assignments I was meant to have done, whatever. I can't believe all of them let me basically have it. They basically always would say like, okay, well, if you can have this written by tomorrow, I'll accept it. So yeah, stressful way to get through university, but. [00:02:12]Swyx: Well, it shows that, I guess, you min-maxed the opportunities. That definitely was a precursor. [00:02:18]Jeremy: I mean, funnily, like in as much as I, you know, in philosophy, the things I found interesting and focused on in the little bit of time I did spend on it was ethics and cognitive science. And it's kind of really amazing that it's now come back around and those are actually genuinely useful things to know about, which I never thought would happen. [00:02:38]Swyx: A lot of, yeah, a lot of relevant conversations there. So you were a consultant for a while and then in the magical month of June 1989, you founded both Optimal Decisions and Fastmeal, which I also briefly used. So thank you for that. [00:02:53]Jeremy: Oh, good for you. Yeah. Cause I had read the statistics, which is that like 90% or something of small businesses fail. So I thought if I start two businesses, I have a higher chance. In hindsight, I was thinking of it as some kind of stochastic thing I didn't have control over, but it's a bit odd, but anyway. [00:03:10]Swyx: And then you were president and chief scientist at Kaggle, which obviously is the sort of composition platform of machine learning. And then Enlitic, where you were working on using deep learning to improve medical diagnostics and clinical decisions. Yeah. [00:03:28]Jeremy: I was actually the first company to use deep learning in medicine, so I kind of founded the field. [00:03:33]Swyx: And even now that's still like a pretty early phase. And I actually heard you on your new podcast with Tanish, where you went very, very deep into the stuff, the kind of work that he's doing, such a young prodigy at his age. [00:03:47]Jeremy: Maybe he's too old to be called a prodigy now, ex-prodigy. No, no. [00:03:51]Swyx: I think he still counts. And anyway, just to round out the bio, you have a lot more other credentials, obviously, but most recently you started Fast.ai, which is still, I guess, your primary identity with Rachel Thomas. So welcome. [00:04:05]Jeremy: Yep. [00:04:06]Swyx: Thanks to my wife. Thank you. Yeah. Doing a lot of public service there with getting people involved in AI, and I can't imagine a better way to describe it than fast, fast.ai. You teach people from nothing to stable diffusion in seven weeks or something, and that's amazing. Yeah, yeah. [00:04:22]Jeremy: I mean, it's funny, you know, when we started that, what was that, like 2016 or something, the idea that deep learning was something that you could make more accessible was generally considered stupid. Everybody knew that deep learning was a thing that you got a math or a computer science PhD, you know, there was one of five labs that could give you the appropriate skills and that you would join, yeah, basically from one of those labs, you might be able to write some papers. 
So yeah, the idea that normal people could use that technology to do good work was considered kind of ridiculous when we started it. And we weren't sure if it was possible either, but we kind of felt like we had to give it a go because the alternative was we were pretty sure that deep learning was on its way to becoming, you know, the most or one of the most, you know, important technologies in human history. And if the only people that could use it were a handful of computer science PhDs, that seemed like A, a big waste and B, kind of dangerous. [00:05:28]Swyx: Yeah. [00:05:29]Alessio: And, you know, well, I just wanted to know one thing on your bio that at Kaggle, you were also the top-ranked participant in both 2010 and 2011. So sometimes you see a lot of founders running companies that are not really in touch with the problem, but you were clearly building something that you knew a lot about, which is awesome. Talking about deep learning, you created, published a paper on ULMFiT, which was kind of the predecessor to multitask learning and a lot of the groundwork that then went into Transformers. I've read back on the paper and you turned this model, AWD-LSTM, which I did the math and it was like 24 to 33 million parameters, depending on what training data set you use. Today that's kind of like not even small, it's like super small. What were some of the kind of like contrarian takes that you had at the time and maybe set the stage a little bit for the rest of the audience on what was kind of like the state of the art, so to speak, at the time and what people were working towards? [00:06:32]Jeremy: Yeah, the whole thing was a contrarian take, you know. So okay, so we started Fast.ai, my wife and I, and we thought, yeah, so we're trying to think, okay, how do we make it more accessible? So when we started thinking about it, it was probably 2015 and then 2016, we started doing something about it. Why is it inaccessible? Okay, well, A, no one knows how to do it other than a few number of people. And then when we asked those few number of people, well, how do you actually get good results? They would say like, oh, it's like, you know, a box of tricks that aren't published. So you have to join one of the labs and learn the tricks. So a bunch of unpublished tricks, not much software around, but thankfully there was Theano and wrappers, and particularly Lasagne, the wrapper, but yeah, not much software around, not much in the way of data sets, you know, very hard to get started in terms of the compute. Like how do you get that set up? So yeah, no, everything was kind of inaccessible. And you know, as we started looking into it, we had a key insight, which was like, you know what, most of the compute and data for image recognition, for example, we don't need to do it. You know, there's this thing which nobody knows about, nobody talks about called transfer learning, where you take somebody else's model, where they already figured out like how to detect edges and gradients and corners and text and whatever else, and then you can fine tune it to do the thing you want to do. And we thought that's the key. That's the key to becoming more accessible in terms of compute and data requirements. So when we started Fast.ai, we focused from day one on transfer learning.
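The transfer-learning workflow described here is essentially what lesson one of the course still teaches. A minimal sketch with today's fastai API — the dataset, architecture, and epoch count below are illustrative placeholders rather than the exact lesson code:

```python
from fastai.vision.all import *

# Download a small labeled dataset (pet photos) and build DataLoaders from it.
path = untar_data(URLs.PETS) / "images"

def is_cat(filename):
    # In this particular dataset, cat images have filenames starting with an uppercase letter.
    return filename[0].isupper()

dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224),
)

# Start from a model pretrained on ImageNet -- it has already figured out edges,
# gradients, corners, and textures -- then fine-tune it on the new task.
learn = vision_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(3)
```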
Lesson one, in fact, was transfer learning, literally lesson one, something not normally even mentioned in, I mean, there wasn't much in the way of courses, you know, the courses out there were PhD programs that had happened to have recorded their lessons and they would rarely mention it at all. We wanted to show how to do four things that seemed really useful. You know, work with vision, work with tables of data, work with kind of recommendation systems and collaborative filtering and work with text, because we felt like those four kind of modalities covered a lot of the stuff that, you know, are useful in real life. And no one was doing anything much useful with text. Everybody was talking about word2vec, you know, like king plus queen minus woman and blah, blah, blah. It was like cool experiments, but nobody's doing anything like useful with it. NLP was all like lemmatization and stop words and topic models and bigrams and SPMs. And it was really academic and not practical. But I mean, to be honest, I've been thinking about this crazy idea for nearly 30 years since I had done cognitive science at university, where we talked a lot about the CELS Chinese room experiment. This idea of like, what if there was somebody that could kind of like, knew all of the symbolic manipulations required to answer questions in Chinese, but they didn't speak Chinese and they were kind of inside a room with no other way to talk to the outside world other than taking in slips of paper with Chinese written on them and then they do all their rules and then they pass back a piece of paper with Chinese back. And this room with a person in is actually fantastically good at answering any question you give them written in Chinese. You know, do they understand Chinese? And is this, you know, something that's intelligently working with Chinese? Ever since that time, I'd say the most thought, to me, the most thoughtful and compelling philosophical response is yes. You know, intuitively it feels like no, because that's just because we can't imagine such a large kind of system. But you know, if it looks like a duck and acts like a duck, it's a duck, you know, or to all intents and purposes. And so I always kind of thought, you know, so this is basically a kind of analysis of the limits of text. And I kind of felt like, yeah, if something could ingest enough text and could use the patterns it saw to then generate text in response to text, it could appear to be intelligent, you know. And whether that means it is intelligent or not is a different discussion and not one I find very interesting. Yeah. And then when I came across neural nets when I was about 20, you know, what I learned about the universal approximation theorem and stuff, and I started thinking like, oh, I wonder if like a neural net could ever get big enough and take in enough data to be a Chinese room experiment. You know, with that background and this kind of like interest in transfer learning, you know, I'd been thinking about this thing for kind of 30 years and I thought like, oh, I wonder if we're there yet, you know, because we have a lot of text. Like I can literally download Wikipedia, which is a lot of text. And I thought, you know, how would something learn to kind of answer questions or, you know, respond to text? And I thought, well, what if we used a language model? So language models are already a thing, you know, they were not a popular or well-known thing, but they were a thing. 
But language models exist to this idea that you could train a model to fill in the gaps. Or actually in those days it wasn't fill in the gaps, it was finish a string. And in fact, Andrej Karpathy did his fantastic RNN demonstration from this at a similar time where he showed like you can have it ingest Shakespeare and it will generate something that looks a bit like Shakespeare. I thought, okay, so if I do this at a much bigger scale, using all of Wikipedia, what would it need to be able to do to finish a sentence in Wikipedia effectively, to do it quite accurately quite often? I thought, geez, it would actually have to know a lot about the world, you know, it'd have to know that there is a world and that there are objects and that objects relate to each other through time and cause each other to react in ways and that causes proceed effects and that, you know, when there are animals and there are people and that people can be in certain positions during certain timeframes and then you could, you know, all that together, you can then finish a sentence like this was signed into law in 2016 by US President X and it would fill in the gap, you know. So that's why I tried to create what in those days was considered a big language model trained on the entirety on Wikipedia, which is that was, you know, a bit unheard of. And my interest was not in, you know, just having a language model. My interest was in like, what latent capabilities would such a system have that would allow it to finish those kind of sentences? Because I was pretty sure, based on our work with transfer learning and vision, that I could then suck out those latent capabilities by transfer learning, you know, by fine-tuning it on a task data set or whatever. So we generated this three-step system. So step one was train a language model on a big corpus. Step two was fine-tune a language model on a more curated corpus. And step three was further fine-tune that model on a task. And of course, that's what everybody still does today, right? That's what ChatGPT is. And so the first time I tried it within hours, I had a new state-of-the-art academic result on IMDB. And I was like, holy s**t, it does work. And so you asked, to what degree was this kind of like pushing against the established wisdom? You know, every way. Like the reason it took me so long to try it was because I asked all my friends in NLP if this could work. And everybody said, no, it definitely won't work. It wasn't like, oh, maybe. Everybody was like, it definitely won't work. NLP is much more complicated than vision. Language is a much more vastly complicated domain. You know, and you've got problems like the grounding problem. We know from like philosophy and theory of mind that it's actually impossible for it to work. So yeah, so don't waste your time. [00:15:10]Alessio: Jeremy, had people not tried because it was like too complicated to actually get the data and like set up the training? Or like, were people just lazy and kind of like, hey, this is just not going to work? [00:15:20]Jeremy: No, everybody wasn't lazy. So like, so the person I thought at that time who, you know, there were two people I thought at that time, actually, who were the strongest at language models were Stephen Merity and Alec Radford. And at the time I didn't know Alec, but I, after we had both, after I'd released ULM Fit and he had released GPT, I organized a chat for both of us with Kate Metz in the New York Times. And Kate Metz answered, sorry, and Alec answered this question for Kate. 
And Kate was like, so how did, you know, GPT come about? And he said, well, I was pretty sure that pre-training on a general large corpus wouldn't work. So I hadn't tried it. And then I read ULMFiT and turns out it did work. And so I did it, you know, bigger and it worked even better. And similar with, with Stephen, you know, I asked Stephen Merity, like, why don't we just find, you know, take your AWD-LSTM and like train it on all of Wikipedia and fine tune it? And he's kind of like, well, I don't think that's going to really fly. Like two years before I did a very popular talk at KDD, the conference where everybody in NLP was in the audience. I recognized half the faces, you know, and I told them all this, I'm sure transfer learning is the key. I'm sure ImageNet, you know, is going to be an NLP thing as well. And, you know, everybody was interested and people asked me questions afterwards and, but not just, yeah, nobody followed up because everybody knew that it didn't work. I mean, even like, so we were scooped a little bit by Dai and Le, Quoc Le, at Google. They had, they had, I already, I didn't even realize this, which is a bit embarrassing. They had already done a large language model and fine tuned it. But again, they didn't create a general purpose, large language model on a general purpose corpus. They only ever tested a domain specific corpus. And I haven't spoken to Quoc actually about that, but I assume that the reason was the same. It probably just didn't occur to them that the general approach could work. So maybe it was that kind of 30 years of mulling over the, the Searle Chinese room experiment that had convinced me that it probably would work. I don't know. Yeah. [00:17:48]Alessio: Interesting. I just dug up Alec's announcement tweet from 2018. He said, inspired by CoVe, ELMo, and ULMFiT, we show that a single transformer language model can be fine-tuned to a wide variety of tasks. It's interesting because, you know, today people think of OpenAI as the leader, kind of kind of like the research lab pushing forward the field. What was that at the time? You know, like kind of like going back five years, people think of it as an overnight success, but obviously it took a while. [00:18:16]Swyx: Yeah. Yeah. [00:18:17]Jeremy: No, I mean, absolutely. And I'll say like, you know, it's interesting that it mentioned ELMo because in some ways that was kind of diametrically opposed to, to ULMFiT. You know, there was these kind of like, so there was a lot of, there was a lot of activity at the same time as ULMFiT was released. So there was, um, so before it, Bryan McCann, I think at Salesforce, had come out with this neat model that did a kind of multitask learning, but again, they didn't create a general fine tune language model first. There was ELMo, um, which I think was released, you know, actually quite a few months after the first ULMFiT example, I think. Um, but yeah, there was a bit of this stuff going on. And the problem was everybody was doing, and particularly after GPT came out, then everybody wanted to focus on zero shot and few shot learning. You know, everybody hated fine tuning. Everybody hated transfer learning. And like, I literally did tours trying to get people to start doing transfer learning and people, you know, nobody was interested, particularly after GPT showed such good results with zero shot and few shot learning.
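To make the three-step recipe concrete: a minimal sketch of the ULMFiT-style pipeline as it looks in the current fastai library, using IMDB sentiment as the task. Step one — the language model pretrained on Wikipedia — ships with the library as the default AWD-LSTM weights; the epoch counts here are illustrative, not a tuned recipe.

```python
from fastai.text.all import *

path = untar_data(URLs.IMDB)

# Step 2: fine-tune the general (Wikipedia-pretrained) language model
# as a language model on the target corpus -- the IMDB reviews themselves.
dls_lm = TextDataLoaders.from_folder(path, is_lm=True, valid_pct=0.1)
learn_lm = language_model_learner(dls_lm, AWD_LSTM, metrics=[accuracy, Perplexity()])
learn_lm.fine_tune(1)
learn_lm.save_encoder("imdb_encoder")

# Step 3: fine-tune a classifier on the actual task (positive vs. negative review),
# reusing the encoder that was just fine-tuned on IMDB text.
dls_clf = TextDataLoaders.from_folder(path, valid="test", text_vocab=dls_lm.vocab)
learn_clf = text_classifier_learner(dls_clf, AWD_LSTM, metrics=accuracy)
learn_clf.load_encoder("imdb_encoder")
learn_clf.fine_tune(4)
```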
And so I actually feel like we kind of went backwards for years and, and not to be honest, I mean, I'm a bit sad about this now, but I kind of got so disappointed and dissuaded by like, it felt like these bigger lab, much bigger labs, you know, like fast AI had only ever been just me and Rachel were getting all of this attention for an approach I thought was the wrong way to do it. You know, I was convinced was the wrong way to do it. And so, yeah, for years people were really focused on getting better at zero shot and few shots and it wasn't until, you know, this key idea of like, well, let's take the ULM fit approach, but for step two, rather than fine tuning on a kind of a domain corpus, let's fine tune on an instruction corpus. And then in step three, rather than fine tuning on a reasonably specific task classification, let's fine tune on a, on a RLHF task classification. And so that was really, that was really key, you know, so I was kind of like out of the NLP field for a few years there because yeah, it just felt like, I don't know, pushing uphill against this vast tide, which I was convinced was not the right direction, but who's going to listen to me, you know, cause I, as you said, I don't have a PhD, not at a university, or at least I wasn't then. I don't have a big set of computers to fine tune huge transformer models. So yeah, it was definitely difficult. It's always been hard. You know, it's always been hard. Like I've always been somebody who does not want to build stuff on lots of big computers because most people don't have lots of big computers and I hate creating stuff that most people can't use, you know, and also stuff that's created on lots of big computers has always been like much more media friendly. So like, it might seem like a recent thing, but actually throughout my 30 years in data science, the attention's always been on, you know, the big iron results. So when I first started, everybody was talking about data warehouses and it was all about Teradata and it'd be like, oh, this big bank has this huge room full of computers and they have like terabytes of data available, you know, at the press of a button. And yeah, that's always what people want to talk about, what people want to write about. And then of course, students coming out of their PhDs and stuff, that's where they want to go work because that's where they read about. And to me, it's a huge distraction, you know, because like I say, most people don't have unlimited compute and I want to help most people, not the small subset of the most well-off people. [00:22:16]Alessio: That's awesome. And it's great to hear, you do such a great job educating that a lot of times you're not telling your own story, you know? So I love this conversation. And the other thing before we jump into Fast.AI, actually, a lot of people that I know, they run across a new architecture and whatnot, they're like, I got to start a company and raise a bunch of money and do all of this stuff. And say, you were like, I want everybody to have access to this. Why was that the case for you? Was it because you already had a successful venture in like FastMail and you were more interested in that? What was the reasoning? [00:22:52]Jeremy: It's a really good question. So I guess the answer is yes, that's the reason why. So when I was a teenager, I thought it would be really cool to like have my own company. You know, I didn't know the word startup. I didn't know the word entrepreneur. I didn't know the word VC. 
And I didn't really know what any of those things were really until after we started Kaggle, to be honest. Even the way it started to what we now call startups. I just thought they were just small businesses. You know, they were just companies. So yeah, so those two companies were FastMail and Optimal Decisions. FastMail was the first kind of synchronized email provider for non-businesses. So something you can get your same email at home, on your laptop, at work, on your phone, whatever. And then Optimal Decisions invented a new approach to insurance pricing. Something called profit-optimized insurance pricing. So I saw both of those companies, you know, after 10 years. And at that point, I had achieved the thing that as a teenager I had wanted to do. You know, it took a lot longer than it should have because I spent way longer in management consulting than I should have because I got caught up in that stupid rat race. But, you know, eventually I got there and I remember my mom saying to me, you must be so proud. You know, because she remembered my dream. She's like, you've done it. And I kind of reflected and I was like, I'm not proud at all. You know, like people quite liked FastMail. You know, it's quite nice to have synchronized email. It probably would have happened anyway. Yeah, I'm certainly not proud that I've helped some insurance companies suck more money out of their customers. Yeah, no, I'm not proud. You know, it's actually, I haven't really helped the world very much. You know, maybe in the insurance case I've made it a little bit worse. I don't know. So, yeah, I was determined to not waste more years of my life doing things, working hard to do things which I could not be reasonably sure would have a lot of value. So, you know, I took some time off. I wasn't sure if I'd ever work again, actually. I didn't particularly want to, because it felt like, yeah, it felt like such a disappointment. And, but, you know, and I didn't need to. I had enough money. Like, I wasn't super rich, but I had enough money. I didn't need to work. And I certainly recognized that amongst the other people I knew who had enough money that they didn't need to work, they all worked ridiculously hard, you know, and constantly put themselves in extremely stressful situations. And I thought, I don't want to be one of those idiots who's tied to, you know, buying a bigger plane than the next guy or whatever. You know, Kaggle came along and I mainly kind of did that just because it was fun and interesting to hang out with interesting people. But, you know, with Fast.ai in particular, you know, Rachel and I had a very explicit, you know, long series of conversations over a long period of time about like, well, how can we be the most helpful to society as a whole, and particularly to those people who maybe need more help, you know? And so we definitely saw the world going in a potentially pretty dystopian direction if the world's most powerful technology was controlled by a small group of elites. So we thought, yeah, we should focus on trying to help that not happen. You know, sadly, it looks like it still is likely to happen. But I mean, I feel like we've helped make it a little bit less likely. So we've done our bit. [00:26:39]Swyx: You've shown that it's possible. And I think your constant advocacy, your courses, your research that you publish, you know, just the other day you published a finding on, you know, learning that I think is still something that people are still talking about quite a lot. 
I think that that is the origin story of a lot of people who are going to be, you know, little Jeremy Howards, furthering your mission with, you know, you don't have to do everything by yourself is what I'm saying. No, definitely. Definitely. [00:27:10]Jeremy: You know, that was a big takeaway from like, Enlitic. At Enlitic it definitely felt like we had to do everything ourselves. And I kind of, I wanted to solve medicine. I'll say, yeah, okay, solving medicine is actually quite difficult. And I can't do it on my own. And there's a lot of other things I'd like to solve, and I can't do those either. So that was definitely the other piece was like, yeah, you know, can we create an army of passionate domain experts who can change their little part of the world? And that's definitely happened. Like I find nowadays, at least half the time, probably quite a bit more that I get in contact with somebody who's done really interesting work in some domain. Most of the time I'd say, they say, yeah, I got my start with fast.ai. So it's definitely, I can see that. And I also know from talking to folks at places like Amazon and Adobe and stuff, which, you know, there's lots of alumni there. And they say, oh my God, I got here. And like half of the people are fast.ai alumni. So it's fantastic. [00:28:13]Swyx: Yeah. [00:28:14]Jeremy: Actually, Andrej Karpathy grabbed me when I saw him at NeurIPS a few years ago. And he was like, I have to tell you, thanks for the fast.ai courses. When people come to Tesla and they need to know more about deep learning, we always send them to your course. And the OpenAI Scholars Program was doing the same thing. So it's kind of like, yeah, it's had a surprising impact, you know, that's just one of like three things we do is the course, you know. [00:28:40]Swyx: Yes. [00:28:40]Jeremy: And it's only ever been at most two people, either me and Rachel or me and Sylvain; nowadays, it's just me. So yeah, I think it shows you don't necessarily need a huge amount of money and a huge team of people to make an impact. [00:28:56]Swyx: Yeah. So just to reintroduce fast.ai for people who may not have dived into it much, there is the courses that you do. There is the library that is very well loved. And I kind of think of it as a nicer layer on top of PyTorch that people should start with by default and use it as the basis for a lot of your courses. And then you have like NBDev, which I don't know, is that the third one? [00:29:27]Jeremy: Oh, so the three areas were research, software, and courses. [00:29:32]Swyx: Oh, sorry. [00:29:32]Jeremy: So then in software, you know, fast.ai is the main thing, but NBDev is not far behind. But then there's also things like FastCore, GHAPI, I mean, dozens of open source projects that I've created and some of them have been pretty popular and some of them are still a little bit hidden, actually. Some of them I should try to do a better job of telling people about. [00:30:01]Swyx: What are you thinking about? Yeah, what's on the course of my way? Oh, I don't know, just like little things. [00:30:04]Jeremy: Like, for example, for working with EC2 and AWS, I created a FastEC2 library, which I think is like way more convenient and nice to use than anything else out there. And it's literally got a whole autocomplete, dynamic autocomplete that works both on the command line and in notebooks that'll like auto-complete your instance names and everything like that. You know, just little things like that.
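The GHAPI library he turns to next works the same way: the whole API surface is generated from GitHub's official OpenAPI spec, so every operation tab-completes. A small usage sketch, assuming a personal access token in the environment and placeholder owner/repo values:

```python
import os
from ghapi.all import GhApi

# Authenticate with a personal access token (assumed to live in GITHUB_TOKEN).
api = GhApi(token=os.environ["GITHUB_TOKEN"])

# Operations are exposed as api.<group>.<operation>, mirroring GitHub's REST
# operation IDs, so notebooks and shells can autocomplete them and show docs.
repo = api.repos.get(owner="fastai", repo="fastcore")
print(repo.stargazers_count)

# List open issues for the same repository.
for issue in api.issues.list_for_repo(owner="fastai", repo="fastcore", state="open"):
    print(issue.number, issue.title)
```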
I try to make like, when I work with some domain, I try to make it like, I want to make it as enjoyable as possible for me to do that. So I always try to kind of like, like with GHAPI, for example, I think that GitHub API is incredibly powerful, but I didn't find it good to work with because I didn't particularly like the libraries that are out there. So like GHAPI, like FastEC2, it like autocompletes both at the command line or in a notebook or whatever, like literally the entire GitHub API. The entire thing is like, I think it's like less than 100K of code because it actually, as far as I know, the only one that grabs it directly from the official open API spec that GitHub produces. And like if you're in GitHub and you just type an API, you know, autocomplete API method and hit enter, it prints out the docs with brief docs and then gives you a link to the actual documentation page. You know, GitHub Actions, I can write now in Python, which is just so much easier than writing them in TypeScript and stuff. So, you know, just little things like that. [00:31:40]Swyx: I think that's an approach which more developers took to publish some of their work along the way. You described the third arm of FastAI as research. It's not something I see often. Obviously, you do do some research. And how do you run your research? What are your research interests? [00:31:59]Jeremy: Yeah, so research is what I spend the vast majority of my time on. And the artifacts that come out of that are largely software and courses. You know, so to me, the main artifact shouldn't be papers because papers are things read by a small exclusive group of people. You know, to me, the main artifacts should be like something teaching people, here's how to use this insight and here's software you can use that builds it in. So I think I've only ever done three first-person papers in my life, you know, and none of those are ones I wanted to do. You know, they were all ones that, like, so one was ULM Fit, where Sebastian Ruder reached out to me after seeing the course and said, like, you have to publish this as a paper, you know. And he said, I'll write it. He said, I want to write it because if I do, I can put it on my PhD and that would be great. And it's like, okay, well, I want to help you with your PhD. And that sounds great. So like, you know, one was the masks paper, which just had to exist and nobody else was writing it. And then the third was the Fast.ai library paper, which again, somebody reached out and said, please, please write this. We will waive the fee for the journal and everything and actually help you get it through publishing and stuff. So yeah, so I don't, other than that, I've never written a first author paper. So the research is like, well, so for example, you know, Dawn Bench was a competition, which Stanford ran a few years ago. It was kind of the first big competition of like, who can train neural nets the fastest rather than the most accurate. And specifically it was who can train ImageNet the fastest. And again, this was like one of these things where it was created by necessity. So Google had just released their TPUs. And so I heard from my friends at Google that they had put together this big team to smash Dawn Bench so that they could prove to people that they had to use Google Cloud and use their TPUs and show how good their TPUs were. And we kind of thought, oh s**t, this would be a disaster if they do that, because then everybody's going to be like, oh, deep learning is not accessible. 
[00:34:20]Swyx: You know, to actually be good at it, [00:34:21]Jeremy: you have to be Google and you have to use special silicon. And so, you know, we only found out about this 10 days before the competition finished. But, you know, we basically got together an emergency bunch of our students and Rachel and I and sat for the next 10 days and just tried to crunch through and try to use all of our best ideas that had come from our research. And so particularly progressive resizing, just basically train mainly on small things, train on non-square things, you know, stuff like that. And so, yeah, we ended up winning, thank God. And so, you know, we turned it around from being like, like, oh s**t, you know, this is going to show that you have to be Google and have TPUs to being like, oh my God, even the little guy can do deep learning. So that's an example of the kind of like research artifacts we do. And yeah, so all of my research is always, how do we do more with less, you know? So how do we get better results with less data, with less compute, with less complexity, with less education, you know, stuff like that. So ULM fits obviously a good example of that. [00:35:37]Swyx: And most recently you published, can LLMs learn from a single example? Maybe could you tell the story a little bit behind that? And maybe that goes a little bit too far into the learning of very low resource, the literature. [00:35:52]Jeremy: Yeah, yeah. So me and my friend, Jono Whittaker, basically had been playing around with this fun Kaggle competition, which is actually still running as we speak, which is, can you create a model which can answer multiple choice questions about anything that's in Wikipedia? And the thing that makes it interesting is that your model has to run on Kaggle within nine hours. And Kaggle's very, very limited. So you've only got 14 gig RAM, only two CPUs, and a small, very old GPU. So this is cool, you know, if you can do well at this, then this is a good example of like, oh, you can do more with less. So yeah, Jono and I were playing around with fine tuning, of course, transfer learning, pre-trained language models. And we saw this, like, so we always, you know, plot our losses as we go. So here's another thing we created. Actually, Sylvain Guuger, when he worked with us, created called fast progress, which is kind of like TQEDM, but we think a lot better. So we look at our fast progress curves, and they kind of go down, down, down, down, down, down, down, a little bit, little bit, little bit. And then suddenly go clunk, and they drop. And then down, down, down, down, down a little bit, and then suddenly clunk, they drop. We're like, what the hell? These clunks are occurring at the end of each epoch. So normally in deep learning, this would be, this is, you know, I've seen this before. It's always been a bug. It's always turned out that like, oh, we accidentally forgot to turn on eval mode during the validation set. So I was actually learning then, or, oh, we accidentally were calculating moving average statistics throughout the epoch. So, you know, so it's recently moving average or whatever. And so we were using Hugging Face Trainer. So, you know, I did not give my friends at Hugging Face the benefit of the doubt. I thought, oh, they've fucked up Hugging Face Trainer, you know, idiots. Well, you'll use the Fast AI Trainer instead. So we switched over to Learner. 
We still saw the clunks and, you know, that's, yeah, it shouldn't really happen because semantically speaking in the epoch, isn't like, it's not a thing, you know, like nothing happens. Well, nothing's meant to happen when you go from ending one epoch to starting the next one. So there shouldn't be a clunk, you know. So I kind of asked around on the open source discords. That's like, what's going on here? And everybody was just like, oh, that's just what, that's just what these training curves look like. Those all look like that. Don't worry about it. And I was like, oh, are you all using Trainer? Yes. Oh, well, there must be some bug with Trainer. And I was like, well, we also saw it in Learner [00:38:42]Swyx: and somebody else is like, [00:38:42]Jeremy: no, we've got our own Trainer. We get it as well. They're just like, don't worry about it. It's just something we see. It's just normal. [00:38:48]Swyx: I can't do that. [00:38:49]Jeremy: I can't just be like, here's something that's like in the previous 30 years of neural networks, nobody ever saw it. And now suddenly we see it. [00:38:57]Swyx: So don't worry about it. [00:38:59]Jeremy: I just, I have to know why. [00:39:01]Swyx: Can I clarify? This is, was everyone that you're talking to, were they all seeing it for the same dataset or in different datasets? [00:39:08]Jeremy: Different datasets, different Trainers. They're just like, no, this is just, this is just what it looks like when you fine tune language models. Don't worry about it. You know, I hadn't seen it before, but I'd been kind of like, as I say, I, you know, I kept working on them for a couple of years after ULM fit. And then I kind of moved on to other things, partly out of frustration. So I hadn't been fine tuning, you know, I mean, Lama's only been out for a few months, right? But I wasn't one of those people who jumped straight into it, you know? So I was relatively new to the kind of Lama fine tuning world, where else these guys had been, you know, doing it since day one. [00:39:49]Swyx: It was only a few months ago, [00:39:51]Jeremy: but it's still quite a bit of time. So, so yeah, they're just like, no, this is all what we see. [00:39:56]Swyx: Don't worry about it. [00:39:56]Jeremy: So yeah, I, I've got a very kind of like, I don't know, I've just got this brain where I have to know why things are. And so I kind of, I ask people like, well, why, why do you think it's happening? And they'd be like, oh, it would pretty obviously, cause it's like memorize the data set. It's just like, that can't be right. It's only seen it once. Like, look at this, the loss has dropped by 0.3, 0.3, which is like, basically it knows the answer. And like, no, no, it's just, it is, it's just memorize the data set. So yeah. So look, Jono and I did not discover this and Jono and I did not come up with a hypothesis. You know, I guess we were just the ones, I guess, who had been around for long enough to recognize that like, this, this isn't how it's meant to work. And so we, we, you know, and so we went back and like, okay, let's just run some experiments, you know, cause nobody seems to have actually published anything about this. [00:40:51]Well, not quite true.Some people had published things, but nobody ever actually stepped back and said like, what the hell, you know, how can this be possible? Is it possible? Is this what's happening? And so, yeah, we created a bunch of experiments where we basically predicted ahead of time. 
It's like, okay, if this hypothesis is correct, that it's memorized in the training set, then we ought to see blah, under conditions, blah, but not under these conditions. And so we ran a bunch of experiments and all of them supported the hypothesis that it was memorizing the data set in a single thing at once. And it's a pretty big data set, you know, which in hindsight, it's not totally surprising because the theory, remember, of the ULMFiT theory was like, well, it's kind of creating all these latent capabilities to make it easier for it to predict the next token. So if it's got all this kind of latent capability, it ought to also be really good at compressing new tokens because it can immediately recognize it as like, oh, that's just a version of this. So it's not so crazy, you know, but it is, it requires us to rethink everything because like, and nobody knows like, okay, so how do we fine tune these things? Because like, it doesn't even matter. Like maybe it's fine. Like maybe it's fine that it's memorized the data set after one go and you do a second go and okay, the validation loss is terrible because it's now really overconfident. [00:42:20]Swyx: That's fine. [00:42:22]Jeremy: Don't, you know, don't, I keep telling people, don't track validation loss, track validation accuracy because at least that will still be useful. Just another thing that's got lost since ULMFiT, nobody tracks accuracy of language models anymore. But you know, it'll still keep learning and it does, it does keep improving. But is it worse? You know, like, is it like, now that it's kind of memorized it, it's probably getting a less strong signal, you know, I don't know. So I still don't know how to fine tune language models properly and I haven't found anybody who feels like they do, like nobody really knows whether this memorization thing is, it's probably a feature in some ways. It's probably some things that you can do usefully with it. It's probably, yeah, I have a feeling it's messing up training dynamics as well. [00:43:13]Swyx: And does it come at the cost of catastrophic forgetting as well, right? Like, which is the other side of the coin. [00:43:18]Jeremy: It does to some extent, like we know it does, like look at Code Llama, for example. So Code Llama was a, I think it was like a 500 billion token fine tuning of Llama 2 using code. And also pros about code that Meta did. And honestly, they kind of blew it because Code Llama is good at coding, but it's bad at everything else, you know, and it used to be good. Yeah, I was pretty sure it was like, before they released it, me and lots of people in the open source discords were like, oh my God, you know, we know this is coming, Jan Lukinsk saying it's coming. I hope they kept at least like 50% non-code data because otherwise it's going to forget everything else. And they didn't, only like 0.3% of their epochs were non-code data. So it did, it forgot everything else. So now it's good at code and it's bad at everything else. So we definitely have catastrophic forgetting. It's fixable, just somebody has to do, you know, somebody has to spend their time training a model on a good mix of data. Like, so, okay, so here's the thing. Even though I originally created three-step approach that everybody now does, my view is it's actually wrong and we shouldn't use it. [00:44:36]Jeremy: And that's because people are using it in a way different to why I created it. You know, I created it thinking the task-specific models would be more specific. 
You know, it's like, oh, this is like a sentiment classifier as an example of a task, you know, but the tasks now are like a, you know, RLHF, which is basically like answer questions that make people feel happy about your answer. So that's a much more general task and it's a really cool approach. And so we see, for example, RLHF also breaks models like, you know, like GPT-4, RLHDEFT, we know from kind of the work that Microsoft did, you know, the pre, the earlier, less aligned version was better. And these are all kind of examples of catastrophic forgetting. And so to me, the right way to do this is to fine-tune language models, is to actually throw away the idea of fine-tuning. There's no such thing. There's only continued pre-training. And pre-training is something where from the very start, you try to include all the kinds of data that you care about, all the kinds of problems that you care about, instructions, exercises, code, general purpose document completion, whatever. And then as you train, you gradually curate that, you know, you gradually make that higher and higher quality and more and more specific to the kinds of tasks you want it to do. But you never throw away any data. You always keep all of the data types there in reasonably high quantities. You know, maybe the quality filter, you stop training on low quality data, because that's probably fine to forget how to write badly, maybe. So yeah, that's now my view, is I think ULM fit is the wrong approach. And that's why we're seeing a lot of these, you know, so-called alignment tacks and this view of like, oh, a model can't both code and do other things. And, you know, I think it's actually because people are training them wrong. [00:46:47]Swyx: Yeah, well, I think you have a clear [00:46:51]Alessio: anti-laziness approach. I think other people are not as good hearted, you know, they're like, [00:46:57]Swyx: hey, they told me this thing works. [00:46:59]Alessio: And if I release a model this way, people will appreciate it, I'll get promoted and I'll kind of make more money. [00:47:06]Jeremy: Yeah, and it's not just money. It's like, this is how citations work most badly, you know, so if you want to get cited, you need to write a paper that people in your field recognize as an advancement on things that we know are good. And so we've seen this happen again and again. So like I say, like zero shot and few shot learning, everybody was writing about that. Or, you know, with image generation, everybody just was writing about GANs, you know, and I was trying to say like, no, GANs are not the right approach. You know, and I showed again through research that we demonstrated in our videos that you can do better than GANs, much faster and with much less data. And nobody cared because again, like if you want to get published, you write a GAN paper that slightly improves this part of GANs and this tiny field, you'll get published, you know. So it's, yeah, it's not set up for real innovation. It's, you know, again, it's really helpful for me, you know, I have my own research lab with nobody telling me what to do and I don't even publish. So it doesn't matter if I get citations. And so I just write what I think actually matters. I wish there was, and, you know, and actually places like OpenAI, you know, the researchers there can do that as well. It's a shame, you know, I wish there was more academic, open venues in which people can focus on like genuine innovation. 
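A toy sketch of what this "continued pre-training" with gradual curation could look like in practice: every data type stays in the mix for the entire run, and only the sampling weights shift toward higher-quality, more task-specific data. The source names and schedule values below are invented for illustration, not taken from any real training run.

```python
import random

# Hypothetical mixture schedule: fraction of training elapsed -> sampling weights.
# Note that no data type is ever dropped to zero, only down-weighted.
MIXTURE_SCHEDULE = {
    0.0: {"web_text": 0.70, "code": 0.15, "instructions": 0.10, "exercises": 0.05},
    0.5: {"web_text": 0.45, "code": 0.25, "instructions": 0.20, "exercises": 0.10},
    0.9: {"web_text": 0.25, "code": 0.25, "instructions": 0.35, "exercises": 0.15},
}

def weights_at(progress: float) -> dict:
    """Return the weights of the most recent phase for progress in [0, 1]."""
    phase = max(p for p in MIXTURE_SCHEDULE if p <= progress)
    return MIXTURE_SCHEDULE[phase]

def sample_source(progress: float) -> str:
    """Sample which dataset the next batch is drawn from."""
    w = weights_at(progress)
    return random.choices(list(w), weights=list(w.values()), k=1)[0]

if __name__ == "__main__":
    for step, total_steps in [(0, 100), (60, 100), (95, 100)]:
        print(step, weights_at(step / total_steps))
```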
[00:48:38]Swyx: Twitter, which unironically has become a little bit of that forum. I wanted to follow up on one thing that you mentioned, which is that you checked around the open source discords. I don't know if it's too, I don't know if it's too pushy to ask like what discords are lively or useful right now. I think that something I definitely felt like I missed out on was the early days of Eleuther AI, which is a very hard bit. And, you know, like what is the new Eleuther? And you actually shouted out the Alignment Lab AI discord in your blog post. And that was the first time I even knew, like I saw them on Twitter, never knew they had a discord, never knew that there was actually substantive discussions going on in there and that you were an active member of it. Okay, yeah. [00:49:23]Jeremy: And then even then, if you do know about that and you go there, it'll look like it's totally dead. And that's because unfortunately, nearly all the discords, nearly all of the conversation happens in private channels. You know, and that's, I guess. [00:49:35]Swyx: How does someone get into that world? Because it's obviously very, very instructive, right? [00:49:42]Jeremy: You could just come to the fast.ai discord, which I'll be honest with you, it's less bustling than some of the others, but it's not terrible. And so like, at least, to be fair, one of our most bustling channels is private. [00:49:57]Swyx: I guess. [00:49:59]Jeremy: So I'm just thinking. [00:50:01]Swyx: It's just the nature of quality discussion, right? Yeah, I guess when I think about it, [00:50:05]Jeremy: I didn't have any private discussions on our discord for years, but there was a lot of people who came in with like, oh, I just had this amazing idea for AGI. If you just thought about like, if you imagine that AI is a brain, then we, you know, this just, I don't want to talk about it. You know, I don't want to like, you don't want to be dismissive or whatever. And it's like, oh, well, that's an interesting comment, but maybe you should like, try training some models first to see if that aligns with your intuition. Like, oh, but how could I possibly learn? It's like, well, we have a course, just actually spend time learning. Like, you know, anyway. And there's like, okay, I know the people who always have good answers there. And so I created a private channel and put them all in it. And I got to admit, that's where I post more often because there's much less, you know, flight of fancy views about how we could solve AGI, blah, blah, blah. So there is a bit of that. But having said that, like, I think the bar is pretty low. Like if you join a Discord and you can hit the like participants or community or whatever button, you can see who's in it. And then you'll see at the top, who the admins or moderators or people in the dev role are. And just DM one of them and say like, oh, here's my GitHub. Well, here's some blog posts I wrote. You know, I'm interested in talking about this, you know, can I join the private channels? And I've never heard of anybody saying no. I will say, you know, Eleuther's all pretty open. So you can do the Eleuther Discord still. You know, one problem with the Eleuther Discord is it's been going on for so long that it's like, it's very inside baseball. It's quite hard to get started. Yeah. CarperAI, I think, is all open. That's under Stability. That's more accessible. [00:52:03]Swyx: Yeah.
[00:52:04]Jeremy: There's also, just recently, Nous Research, that does like the Hermes models and data sets, just opened. They've got some private channels, but it's pretty open, I think. You mentioned Alignment Lab, that one it's all the interesting stuff is on private channels. So just ask. If you know me, ask me, cause I've got admin on that one. There's also, yeah, OS Skunkworks, OS Skunkworks AI is a good Discord, which I think it's open. So yeah, they're all pretty good. [00:52:40]Swyx: I don't want you to leak any, you know, Discords that don't want any publicity, but this is all helpful. [00:52:46]Jeremy: We all want people, like we all want people. [00:52:49]Swyx: We just want people who like, [00:52:51]Jeremy: want to build stuff, rather than people who, and like, it's fine to not know anything as well, but if you don't know anything, but you want to tell everybody else what to do and how to do it, that's annoying. If you don't know anything and want to be told like, here's a really small kind of task that as somebody who doesn't know anything is going to take you a really long time to do, but it would still be helpful. Then, and then you go and do it. That would be great. The truth is, yeah, [00:53:19]Swyx: like, I don't know, [00:53:20]Jeremy: maybe 5% of people who come in with great enthusiasm and saying that they want to learn and they'll do anything. [00:53:25]Swyx: And then somebody says like, [00:53:25]Jeremy: okay, here's some work you can do. Almost nobody does that work. So if you're somebody who actually does the work and follows up, you will massively stand out. That's an extreme rarity. And everybody will then want to help you do more work. [00:53:41]Swyx: So yeah. [00:53:41]Jeremy: So just, yeah, just do work and people will want to support you. [00:53:47]Alessio: Our Discord used to be referral only for a long time. We didn't have a public invite and then we opened it and they're kind of like channel gating. Yeah. A lot of people just want to do, I remember it used to be like, you know, a forum moderator. [00:54:00]Swyx: It's like people just want to do [00:54:01]Alessio: like drive-by posting, [00:54:03]Swyx: you know, and like, [00:54:03]Alessio: they don't want to help the community. They just want to get their question answered. [00:54:07]Jeremy: I mean, the funny thing is our forum community does not have any of that garbage. You know, there's something specific about the low latency thing where people like expect an instant answer. And yeah, we're all somehow in a forum thread where they know it's like there forever. People are a bit more thoughtful, but then the forums are less active than they used to be because Discord has got more popular, you know? So it's all a bit of a compromise, you know, running a healthy community is, yeah, it's always a bit of a challenge. All right, we got so many more things [00:54:47]Alessio: we want to dive in, but I don't want to keep you here for hours. [00:54:50]Swyx: This is not the Lex Fridman podcast [00:54:52]Alessio: we always like to say. One topic I would love to maybe chat a bit about is Mojo, Modular, you know, Chris Lattner, who we had on the podcast. So we want to spend a little time there. You recently did a hacker's guide to language models and you ran through everything from quantized models to like smaller models, larger models, and all of that. But obviously Modular is taking its own approach. Yeah, what got you excited?
I know you and Chris have been talking about this for like years and a lot of the ideas you had, so. [00:55:23]Jeremy: Yeah, yeah, yeah, yeah, no, absolutely. So I met Chris, I think it was at the first TensorFlow Dev Summit. And I don't think he had even like, I'm not sure if he'd even officially started his employment with Google at that point. So I don't know, you know, certainly nothing had been mentioned. So I, you know, I admired him from afar with LLVM and Swift and whatever. And so I saw him walk into the courtyard at Google. It's just like, oh s**t, man, that's Chris Latner. I wonder if he would lower his standards enough to talk to me. Well, worth a try. So I caught up my courage because like nobody was talking to him. He looked a bit lost and I wandered over and it's like, oh, you're Chris Latner, right? It's like, what are you doing here? What are you doing here? And I was like, yeah, yeah, yeah. It's like, oh, I'm Jeremy Howard. It's like, oh, do you do some of this AI stuff? And I was like, yeah, yeah, I like this AI stuff. Are you doing AI stuff? It's like, well, I'm thinking about starting to do some AI stuff. Yeah, I think it's going to be cool. And it's like, wow. So like, I spent the next half hour just basically brain dumping all the ways in which AI was stupid to him. And he listened patiently. And I thought he probably wasn't even remember or care or whatever. But yeah, then I kind of like, I guess I re-caught up with him a few months later. And it's like, I've been thinking about everything you said in that conversation. And he like narrated back his response to every part of it, projects he was planning to do. And it's just like, oh, this dude follows up. Holy s**t. And I was like, wow, okay. And he was like, yeah, so we're going to create this new thing called Swift for TensorFlow. And it's going to be like, it's going to be a compiler with auto differentiation built in. And blah, blah, blah. And I was like, why would that help? [00:57:10]Swyx: You know, why would you? [00:57:10]Jeremy: And he was like, okay, with a compiler during the forward pass, you don't have to worry about saving context, you know, because a lot will be optimized in the backward. But I was like, oh my God. Because I didn't really know much about compilers. You know, I spent enough to kind of like, understand the ideas, but it hadn't occurred to me that a compiler basically solves a lot of the problems we have as end users. I was like, wow, that's amazing. Okay, you do know, right, that nobody's going to use this unless it's like usable. It's like, yeah, I know, right. So I was thinking you should create like a fast AI for this. So, okay, but I don't even know Swift. And he was like, well, why don't you start learning it? And if you have any questions, ask me. It's just like, holy s**t. Like, not only has Chris Latner lowered his standards enough to talk to me, but he's offering me personal tutoring on the programming language that he made. So I was just like, I'm not g

GEM Podcast
MONEY AND MASCULINITY, FT. LUKE LINTZ, WITH GAV KWOK – GEM EP.20

GEM Podcast

Play Episode Listen Later Oct 17, 2023 93:55


In this podcast episode I sit down with one of my good friends Luke Lintz. Luke is the founder of HighKey Enterprises, and has made millions in his early 20s. Luke shares his story of how he came up, money, masculinity, and what it takes to be successful in this new generation and decade. We also talk about E-commerce, success, failures and a whole lot more. Connect with Luke on Instagram: @lukelintz If you got any value from this, subscribe to the podcast and leave a 5 star review. Thank you for tuning in! If you would like to ask me any question or suggest a topic you would like me to cover on the podcast, message me on Instagram @gav.kwok

Poured Over
Jean Kwok on THE LEFTOVER WOMAN

Poured Over

Play Episode Listen Later Oct 12, 2023 42:43


“I want people to read it with joy, just for the story. But I do hope that they'll pick up something else along the way about … deeper things.” The Leftover Woman by Jean Kwok finds two women on incredibly different paths in life in collision as they grapple with issues of culture, class and motherhood. Kwok joins us to talk about how she started a career in writing, the importance and language and cultural identity, how books open doors to learning and growth and more with Miwa Messer, host of Poured Over. We end this episode with TBR Topoff book recommendations from Madyson and Mary.    This episode of Poured Over was hosted by Executive Producer Miwa Messer and mixed by Harry Liang.      New episodes land Tuesdays and Thursdays (with occasional Saturdays) here and on your favorite podcast app.       Featured Books (Episode):  The Leftover Woman by Jean Kwok  Girl in Translation by Jean Kwok  Searching for Sylvie Lee by Jean Kwok  Mambo in Chinatown by Jean Kwok  Happiness Falls by Angie Kim  The Puzzle Master by Danielle Trussoni  Broadway Butterfly by Sara DiVello    Featured Books (TBR Topoff):  Greek Lessons by Han Kang  The Joy Luck Club by Amy Tan 

What's New in Adapted Physical Education
Para Report Cards on Physical Activity and Health Around the Globe: A Conversation with Dr. Kwok Ng

What's New in Adapted Physical Education

Play Episode Listen Later Sep 26, 2023 50:03


In this podcast, we had an excellent conversation with international APA scholar Dr. Kwok Ng (@kwokwng) about the newly formed Para Report Cards, which graded 14 countries, including the US and Canada, on an assortment of physical activity and health indicators. Dr. Ng is at the University of Limerick, the University of Turku, and the University of Eastern Finland. His research is largely interdisciplinary and focuses on health promotion for children, especially those with disabilities. Within this podcast we discuss the need for the Para Report Cards, how they were developed, and some of their main findings. The discussion also touches on the potential impact of these report cards on policies and initiatives aimed at improving physical activity for children and adolescents with disabilities.

Superwomen with Rebecca Minkoff
A Match Made in Suncare Heaven: The Birth of Suncare for Everyone with Co-founders Emily Doyle and Mei Kwok of Dune Suncare

Superwomen with Rebecca Minkoff

Play Episode Listen Later Sep 5, 2023 45:13


Enjoy this video podcast on Spotify and YouTube! What happens when two event production directors, Emily Doyle and Mei Kwok, come together in a dreamy co-founder marriage? They give birth to an incredible baby, of course! That baby is Dune Suncare, the first-ever "clear gel suncare line packed with clinically proven skincare benefits." It all began back in March 2020, at the beginning of Covid. Without much else going on, Emily and Mei knew their dream business could not wait. The decision to launch a suncare line didn't take much deliberation. They agreed from the start that their brand had to speak to everyone and be accessible. Cut to three and a half years later and Emily and Mei's baby is a massive success. You can find Dune Suncare products everywhere from boutiques and hotels to big-name retail stores like Ulta and online retailer Amazon, making them accessible for everyone. What is the secret to their success? Taking time to nurture their own relationship as well as the relationships with everyone who helped launch their dream baby. Thanks for listening! Don't forget to order Rebecca's new book, Fearless: The New Rules for Unlocking Creativity, Courage, and Success. Follow Superwomen on Instagram.
Social Media: @dunesuncare
Big Ideas: what it takes to launch a suncare line from zero; how to have a successful co-founder relationship; the challenges of raising capital as a woman
--- Support this podcast: https://podcasters.spotify.com/pod/show/superwomen/support

FLOSS Weekly (MP3)
FLOSS Weekly 743: Data Is Surprisingly Exciting - Apache SeaTunnel, William Kwok

FLOSS Weekly (MP3)

Play Episode Listen Later Aug 2, 2023 65:18


William Kwok speaks with Doc Searls and Shawn Powers about Apache SeaTunnel, an exciting and extremely useful open-source way to synchronize multiple databases. Hosts: Doc Searls and Shawn Powers Guest: William Kwok Download or subscribe to this show at https://twit.tv/shows/floss-weekly Think your open source project should be on FLOSS Weekly? Email floss@twit.tv. Thanks to Lullabot's Jeff Robbins, web designer and musician, for our theme music. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit

The Glossy Beauty Podcast
Dune Suncare founders Emily Doyle and Mei Kwok: ‘We want to speak to as wide an audience as possible'

The Glossy Beauty Podcast

Play Episode Listen Later Jul 20, 2023 51:16


In a timely episode for the height of summer, this week's Glossy Beauty Podcast covers a category that has blown up in beauty: sunscreen. Long gone are the days when options were limited to a handful of brands like Coppertone and Hawaiian Tropic. In recent years, a wide range of chic new sunscreen labels have been hitting the market, while skin-care brands are churning out new SPF product launches. One of these hip new brands is one-year-old Dune Suncare, which uses a colorful, nostalgic aesthetic to appeal to both men and women across all age groups. The episode features the brand's co-founders, Emily Doyle, an event production and marketing pro, and Mei Kwok, who also produces events and performs as a highly sought-after DJ. The founders have created a cool factor for the brand by working with luxury hotels, including QR codes linking to Kwok's playlists on its packaging, and shooting campaigns with top fashion photographers. But the brand's distribution plan is all about accessibility, with a focus on scaling through wholesale partners including Amazon and Ulta Beauty.

China Unscripted
#211 Hong Kong is a Powder Keg | Anna Kwok

China Unscripted

Play Episode Listen Later Jul 17, 2023 44:07


Despite the facade of calm, Hong Kong is a powder keg underneath. Many Hongkongers don't agree with the Chinese Communist Party's takeover of Hong Kong, but also feel powerless to change it. In this episode of China Unscripted, we discuss the bounties Hong Kong has placed on 8 democracy activists (including our guest), what Hongkongers can do to keep the protest movement alive, and the Biden administration's relationship with China. Joining us in this episode is Anna Kwok, the executive director of the Hong Kong Democracy Council.

fiction/non/fiction
S6 Ep. 39: The Kids Are at Work: Jean Kwok On Recent Efforts to Loosen Child Labor Laws and Her Years as a Child Worker in New York

fiction/non/fiction

Play Episode Listen Later Jun 29, 2023 41:40


Novelist Jean Kwok joins co-hosts V.V. Ganeshananthan and Whitney Terrell to discuss recent changes to child labor laws in the U.S., as more than 10 states have proposed or enacted legislation that would loosen restrictions on minors working. The three talk about what the shift means in relation to labor shortages and consider migrant children's unique vulnerability to exploitation. Kwok describes working in a New York factory from kindergarten through high school and how that experience continues to affect her life. She also reads from her novel Girl in Translation, which is based on her years as a child worker. To hear the full episode, subscribe through iTunes, Google Play, Stitcher, Spotify, or your favorite podcast app (include the forward slashes when searching). You can also listen by streaming from the player below. Check out video versions of our interviews on the Fiction/Non/Fiction Instagram account, the Fiction/Non/Fiction YouTube Channel, and our show website: https://www.fnfpodcast.net/ This episode of the podcast was produced by Anne Kniggendorf.
Jean Kwok: The Leftover Woman; Searching for Sylvie Lee; Mambo in Chinatown; Girl in Translation
Others:
Adrian Dickey
“Dumb and dangerous”: US sees surge in efforts to weaken child labor regulations, by Michael Sainato, The Guardian
“Iowa Governor Signs Law to Loosen Child Labor Regulations” by Katarina Sostaric, Iowa Public Radio
“Iowa Senate Republicans Pass Bill to Relax Some Child Labor Laws” by Katarina Sostaric, Iowa Public Radio
“It's just crazy”: Republicans attack US child labor laws as violations rise, by Michael Sainato, The Guardian
“Alone and Exploited, Migrant Children Work Brutal Jobs Across the U.S.” by Hannah Dreier, The New York Times
“Republicans and Democrats have different top priorities for U.S. immigration policy” by J. Baxter Oliphant and Andy Cerda, Pew Research Center
“House G.O.P., Divided Over Immigration, Advances Border Crackdown Plan” by Karoun Demirjian, The New York Times
“Jean Kwok, Author of Girl in Translation” by Jen Chung, The Gothamist
“Children as young as 12 work legally on farms, despite years of efforts to change law” by Andrea Hsu, NPR
“We give our blood so they live comfortably”: Sri Lanka's tea pickers say they go hungry and live in squalor, by Jeevan Ravindran, The Guardian
“Meet the urban sharecroppers” by Tanis Taylor, The Guardian
China's One-Child Policy - The New York Times
Learn more about your ad choices. Visit megaphone.fm/adchoices

Glam & Grow - Fashion, Beauty, and Lifestyle Brand Interviews
Skincare meets Suncare with Co-Founders Emily Doyle and Mei Kwok of Dune Suncare

Glam & Grow - Fashion, Beauty, and Lifestyle Brand Interviews

Play Episode Listen Later Jun 5, 2023 42:35


Whether you're a sun worshiper or a shade seeker, this episode is for you. Dune is the brand behind the revolutionary Invisible SPF Gel, clinically backed with loads of skincare benefits. Their products feature cutting-edge formulas that are gentle enough for all skin types, and they use eco-friendly packaging materials to minimize their impact on the environment. Dune Suncare is committed to transparency, and they prioritize ingredient safety and sustainability to ensure that their products are both effective and conscious. Oh, and the best part is, you'll never deal with white cast ever again.
Emily and Mei also share: how a pandemic and losing their jobs ultimately pushed them to pursue their passions; the complexities behind the formulations; how they created a brand that is for everyone, all genders and ages; problems the suncare industry is currently facing; and why sunscreen is the #1 beauty product on the market. You'll also hear Emily and Mei's biggest insights and what's next for Dune.
We hope you enjoy this episode and gain valuable insights into their respective journeys and the growth of their brand. Don't forget to like and subscribe to the Glam & Grow podcast for more exciting perspectives. Be sure to check out Dune Suncare at www.dunesuncare.com and on Instagram @dunesuncare
This episode is sponsored by Shopify. Shopify POS is your command center for your retail store. From accepting payments to managing inventory, Shopify has EVERYTHING you need to sell in-person. Sign up for a one-dollar-per-month trial period at www.shopify.com/glam
This episode is brought to you by Wavebreak. Leading direct-to-consumer brands hire Wavebreak to turn email marketing into a top revenue driver. Most eCommerce brands don't email right... and it costs them. At Wavebreak, our eCommerce email marketing agency helps qualified stores recapture 6-7 figures of lost revenue each year. From abandoned cart emails to Black Friday campaigns, our best-in-class team of email specialists manages the entire process: strategy, design, copywriting, coding, and testing. All aimed at driving growth, profit, brand recognition, and most importantly, ROI. Curious if Wavebreak is right for you? Reach out at Wavebreak.co

Opening Arguments
OA723: Right-Wing Judges Take the Money and Run… Away With Your Civil Rights

Opening Arguments

Play Episode Listen Later Apr 11, 2023 52:59


Liz and Andrew tackle three stories: an update on the dueling mifepristone rulings in Texas and Washington, Clarence Thomas's latest corrupt activities, and (for patrons) an update on Steve Bannon's sugar daddy.
Notes
OA 594: Impeach Clarence Thomas https://openargs.com/oa594-impeach-clarence-thomas/
OA 714 on Kwok https://openargs.com/oa714-gonna-be-hard-for-steve-bannons-sugar-daddy-to-write-those-checks-from-prison/
Kwok superseding indictment https://storage.courtlistener.com/recap/gov.uscourts.nysd.595325/gov.uscourts.nysd.595325.19.0_1.pdf
Ethics in Government Act of 1978, 5a U.S.C. §§ 101 et seq. https://www.law.cornell.edu/uscode/text/5a/compiledact-95-521/title-I
Washington v. FDA Motion for Clarification https://storage.courtlistener.com/recap/gov.uscourts.waed.102225/gov.uscourts.waed.102225.81.0_3.pdf
Washington v. FDA Motion to Expedite https://storage.courtlistener.com/recap/gov.uscourts.waed.102225/gov.uscourts.waed.102225.82.0.pdf
FDA 5th Circuit Motion to Stay https://storage.courtlistener.com/recap/gov.uscourts.ca5.213145/gov.uscourts.ca5.213145.20.0_4.pdf?utm_source=substack&utm_medium=email
Intervenors 5th Circuit Motion to Stay https://storage.courtlistener.com/recap/gov.uscourts.ca5.213145/gov.uscourts.ca5.213145.22.1_1.pdf
Roberts end-of-year report on the Supreme Court 2021 https://www.supremecourt.gov/publicinfo/year-end/2021year-endreport.pdf
NPR on 2022 state supreme court races https://www.npr.org/2022/11/05/1134514218/money-is-pouring-into-state-judicial-campaigns-this-year
ProPublica Thomas story https://www.propublica.org/article/clarence-thomas-scotus-undisclosed-luxury-travel-gifts-crow
Wall Street Journal on federal judges violating the ethics laws https://www.wsj.com/articles/131-federal-judges-broke-the-law-by-hearing-cases-where-they-had-a-financial-interest-11632834421
-Support us on Patreon at: patreon.com/law
-Follow us on Twitter: @Openargs
-Facebook: https://www.facebook.com/openargs/
-For show-related questions, check out the Opening Arguments Wiki, which now has its own Twitter feed! @oawiki
-And finally, remember that you can email us at openarguments@gmail.com