In this powerful episode of the We Heart Therapy YouTube channel and podcast, on our series EFT Talk, host Dr. Belle (Anabelle Bugatti, PhD, LMFT) is joined by certified AZ EFT Trainer Rachel Thomas, LMFT, to explore Attachment-Based Suicide Assessment within the framework of Emotionally Focused Therapy (EFT).
Welcome to the latest episode of Living Proof, our podcast produced in collaboration with Plus.maths.org. In this episode, we dive into the importance of communicating mathematics to a broader audience, a growing priority within the maths community. Sara Khan, Communications Manager at the Isaac Newton Institute, shares how the INI is championing mathematics communication. Then, Rachel Thomas and Marianne Freiberger, editors of Plus.maths.org, revisit their conversation with Hannah Fry, who has recently taken on the role of Professor of the Public Understanding of Mathematics at the University of Cambridge. To learn more about the organizations and events mentioned in this episode that support mathematics communication, check out the following links:
The Mathsci-comm network, funded by an INI Network grant, connects individuals working in or with an interest in communicating complex mathematics and data science to non-expert audiences. The network is managed by Plus.maths.org editors Marianne Freiberger and Rachel Thomas, alongside Maha Kaouri from the Newton Gateway to Mathematics.
The Communicating Mathematical and Data Sciences — What Does Success Look Like? workshop, organized by the Mathsci-comm network, was held at the INI in November 2024. It was at this event that Hannah Fry announced her move to Cambridge.
The Graduate Training Workshop for the Mathematical Sciences, hosted by the Newton Gateway to Mathematics, took place at the INI in February 2025, with a key focus on communication. This session was led by Plus.maths.org editors Marianne Freiberger and Rachel Thomas, along with Alison Kiddle and Katie Steckles, and followed a pilot event in October 2024.
The Talking Maths in Public (TMiP) conference, held biennially in the UK, brings together those who work in or contribute to communicating mathematics to the public. TMiP 2025 will be held at the University of Warwick from 28th to 30th August 2025, with an option to attend online.
We kick off our latest series of podcasts with an episode of Living Proof, produced jointly with the Isaac Newton Institute for Mathematical Sciences (INI). This episode is all about the communication of mathematics to the wider world, which is becoming ever more recognised as a priority within the maths community. We talk to Sara Khan, Communications Manager at the INI, about how this renowned research institute supports mathematics communication. And we revisit our interview with Hannah Fry, who has just taken up her new role as Professor of the Public Understanding of Mathematics here at the University of Cambridge. As Hannah puts it, "It's really important that people feel that [mathematics] is being done with them, not to them." We also find out about Hannah's own research in her previous role as Professor for the Mathematics of Cities at University College London, and hear about her favourite mathematical moment. To find out more about organisations and events in support of mathematics communication mentioned in this episode, see the following links:
The Mathsci-comm network is funded by an INI Network grant and aims to connect those working in, and with a stake in, communicating complex mathematics and data science to a variety of non-expert audiences. The network is run by the Editors of plus.maths.org, Marianne Freiberger and Rachel Thomas, together with Maha Kaouri from the Newton Gateway to Mathematics.
Communicating mathematical and data sciences — what does success look like? was a workshop organised by the Mathsci-comm network, which took place at the INI in November 2024. Hannah Fry announced her move to Cambridge at this event.
The Graduate training workshop for the Mathematical Sciences, organised by the Newton Gateway to Mathematics, took place at the INI in February 2025 and comprised a significant component dedicated to communication, delivered by the Editors of plus.maths.org, Marianne Freiberger and Rachel Thomas, together with Alison Kiddle and Katie Steckles.
Welcome back for the next journey of The Family Express Podcast with Kathryn de Bruin, LMFT and Ronda Evans, LMFT, where our destination is resilient and connected families. Our guest speaker is Rachel Thomas. All aboard! Thank you for listening! Kathryn de Bruin is an ICEEFT Certified EFT Trainer. Kathryn and Ronda are both licensed marriage and family therapists, EFT supervisors and therapists, and AAMFT Approved Supervisors.
You can follow Kathryn de Bruin: Facebook, YouTube, IG, Yelp, Google+, Twitter, Website
You can follow Ronda Evans: Facebook, IG, LinkedIn, Website
You can connect with Rachel here: https://therapywithheart.com/
Gender equity suddenly cuts both ways in 2024: some argue that men are being unfairly demonized, while data shows a continuing gap in opportunity for women. Live from the Masters of Scale Summit in San Francisco, Jessi Hempel, host of the Hello Monday podcast, leads a thought-provoking panel with the CEO of Lean In, Rachel Thomas; the chief diversity officer at Meta, Maxine Williams; and the founder of Girls Who Code and Moms First, Reshma Saujani. They get real on meritocracy in the workplace and who gets ahead, whether it's truly harder to be a white man in 2024, and more, offering a sobering reality check on corporate diversity programs. Visit the Rapid Response website here: https://www.rapidresponseshow.com/
Into The Mystic, Episode 12: Rachel Thomas-Medwid (Fenwick). Jason Connell has Rachel Thomas-Medwid, filmmaker of "Fenwick" and Official Selection of the Mystic Film Festival in 2024, on the show.
Recorded: 08-28-24
Studio: Just Curious Media
Partner: Mystic Film Festival
Listen: Buzzsprout, Apple Podcasts, Spotify, Google Podcasts, Amazon Music, iHeartRadio, TuneIn
Watch: YouTube
Host: Jason Connell
Support the show
Dr. Sandie Morgan is joined by Rachel Thomas as the two discuss the importance of role models and mentors for vulnerable youth.
Rachel Thomas
Rachel Thomas is a survivor, advocate, and educator. She is serving her second term on the White House Advisory Council, co-founded Sowers Education Group, and speaks all over the country. Rachel Thomas will be the Amplify 2024 Keynote speaker to support the work of the Global Center. She has previously been a guest on the Ending Human Trafficking Podcast on episode #196: Ending The Game and episode #272: The Cool Aunt Series.
Main Points
Role models and mentors have a significant impact on youth, particularly Black youth and those in the foster care system. They are crucial in providing guidance, stability, and positive examples that many youth may lack.
Hip hop and rap culture have a large influence on youth, especially in terms of role modeling and aspirations. Many youth, particularly those in vulnerable situations like foster care, look up to hip hop artists who may embody success and empowerment in ways that resonate with them, although there are potential pitfalls of hypersexualization and dysfunctional themes in the genre.
When it comes to mentoring youth, challenges may arise surrounding the idolized figures in hip hop culture; however, it is important to have conversations around these influences without dismissing the artists or their influence.
As a mentor, it is important to build rapport, understand the youth's perspectives, and gradually introduce alternative ways of thinking and aspirations.
It is important that adults get involved in mentoring programs, as just one committed mentor can make a significant difference in a young person's life.
Resources
#196: Ending the Game
#272: The Cool Aunt
Sowers Education Group
Coaching for Leaders
Transcript
Sandra Morgan 0:14 You're listening to the Ending Human Trafficking Podcast. This is episode #324: Role Models and Mentors, with Rachel Thomas. Welcome to the Ending Human Trafficking Podcast here at Vanguard University's Global Center for Women and Justice in Orange County, California. My name is Dr. Sandie Morgan and this is the show where we empower you to study the issues, be a voice, and make a difference in ending human trafficking. I'm so happy to welcome back our good friend, Rachel Thomas.
Rachel Thomas 0:58 Hi Dr. Morgan, thank you so much for having me back. This is an honor and a pleasure, always.
Sandra Morgan 1:03 I just love having conversations with you, Rachel, I learn so much. You're an amazing survivor, advocate, and educator. You're serving your second term on the White House Advisory Council, you co-founded Sowers Education Group, you speak all over the country, and in fact, I'm really excited that you're going to be our Amplify 2024 Keynote, to support the work of the Global Center. We're really excited. You've been a frequent flyer on the Ending Human Trafficking Podcast. I'd like to recommend that people go back and listen to episode #196: Ending The Game, probably one of the best discussions on psychological coercion, and your episode #272 with The Cool Aunt Series. I'm happy to have you back, Rachel.
Rachel Thomas 2:08 Thank you, honored to be back, and glad that you're still doing this important podcast. This is such a great resource and service to the community.
Sandra Morgan 2:17 I just love it. I got an invitation in the mail yesterday, an email, to go on a talk show in Dublin, Ireland.
Rachel Thomas 2:27 Wow.
Sandra Morgan 2:29 I just love how international our community is, and people care. Hopefully because of that, other people will get a chance to listen to our conversation today. We're going to talk about the theme of Models, Role Models and Mentors for Black Youth. When I think about role models, when I was a young person, I wanted to be like my teacher, I wanted to be a professor. One of the people I wanted to be like,
The Government is expected to announce a record-breaking medicines boost of at least $600 million as a way of keeping National's promise on cancer drugs. Two separate sources, including a government official, have leaked the details to The Post's health reporter, Rachel Thomas, who talks Newsable through what she has learned.
Special guest is author L. Sydney Fisher, who's here to discuss the true story of college students who suffered dearly after playing with a Ouija board. Get her book 'The Devil's Board', now an Amazon #1 bestseller inspired by TRUE EVENTS. On an American college campus in 1987, three students began playing a seemingly innocent game of contacting the dead. Word spread fast around campus and curiosity grew, expanding the group to more than forty people. Spirits were summoned almost daily, and the dark world's influence began to take its toll as one student fell gravely ill and relationships began to crumble. Months later, the dead would be resurrected, and this time there would be Hell to pay. This is their story... Rachel Thomas was more than happy to leave the haunted house she had lived in for the last decade. She had every reason to be excited about her future as a college freshman entering Riverside Community College. But she had no way of knowing that she would find herself in the terrifying grips of the paranormal again when her new roommate, Josie Norton, and her friends began using the Ouija board. In spite of Rachel's reluctance to join the group's nocturnal ritual of contacting the dead, she finds herself sucked into the drama and a witness to the spirit's malevolent nature as strange phenomena begin happening. Weeks turn into months as Amber Simmons becomes obsessed with the game and her "ghost" friend, who assumes the identity of a human being now in the afterlife. As the malevolent spirit continues to control and manipulate Amber, the close-knit friends are terrorized until the Ouija spirit makes one final show of force, determined to kill them all!
Note from the author: The Devil's Board is based on true events that happened in the Fall/Spring year of 1987-88. The college campus is located in small-town USA. To this day, the story of Ryan Banks still remains a haunting mystery. All names and locations have been changed to protect the privacy of the institution and the characters of the story. Some parts of The Devil's Board have been dramatized for the sake of storytelling.
*We have removed our shows from Spotify. We recommend Apple Podcasts, Amazon Music, Podcast Addict, PocketCast, and other available apps.*
EPISODE 51
Today, Pow•HER•ful Retreat guest Rachel Thomas shares her journey to JGFG and how she has been able to find herself again since joining a year ago. We also chat about her journey with macros, what Rachel has been able to overcome since joining JGFG, and her goals for 2024. This week, we challenge you to be the strongest in the room! Don't be afraid to be YOU or take up space or be "too much." Go ahead and be quirky and loud and whoever you were meant to be! It is not your responsibility to make others comfortable. You are enough and do not need to apologize for being you!
This episode is for you if:
You want to hear from a Pow•HER•ful Retreat attendee
You are interested in starting your macro journey
You want to find out what just one year of JGFG can do for you, both physically and mentally
You are strong. You are powerful. You are worthy.
Connect with Jen:
On Facebook - https://www.facebook.com/jens.get.fit.group/
On Instagram - https://www.instagram.com/jens.get.fit.group/
Check out or join Jen's Get Fit Group - http://jensgetfitgroup.com
SHOW NOTES: https://jensgetfitgroup.com/episode51
Do you have skills in your real estate or conventional business that you can share with others so that we strengthen agencies across the country? I share the lessons I've learned from my dear friends and clients Drew and Rachel Thomas while attending their Salon Owner Mastermind last week. And I'm going to be at several events across the country each month in 2024. Come join me for free as a guest. Email gary@garypinkerton.com or text me if interested.
Highlights
Future events in Salt Lake City that listeners can attend and learn
How his client and friend are successful in the plumbing industry
Takeaways from the Salon Owner Mastermind event put together by Drew and Rachel
Why you should consider teaching others within their industry
The importance of maintaining business ownership and gaining agency in one's life
Why successful, wealthy people are not greedy, but rather grateful and abundantly minded
Links and Resources from this Episode
Connect with Gary Pinkerton
https://www.paradigmlife.net/
gpinkerton@paradigmlife.net
https://garypinkerton.com/
https://www.aspiretour.com/
Review, Subscribe and Share
If you like what you hear please leave a review by clicking here
Make sure you're subscribed to the podcast so you get the latest episodes.
Subscribe with Apple Podcasts
Follow on Audible
Subscribe with Listen Notes
Subscribe with RSS
In this episode, Berks & Bucks FA's Referee Training Officer, Minesh Gupta, is joined by grassroots referees Carol John, Imogen Hooper and Rachel Thomas, and Women's Super League match official Grace Lowe, to discuss the importance of female referees in the game. Together with Minesh, the group discusses the female refereeing pathway and highlights the support available to help people progress. Our guests also provide fascinating insights into their journeys in the game, including individual challenges and highlights, and share top tips for up-and-coming referees. For more information about grassroots football refereeing in Berks & Bucks, please visit www.berks-bucksfa.com/referees. Music by Darren Fellerdale.
Hugo speaks with Johno Whitaker, a Data Scientist/AI Researcher doing R&D with answer.ai. His current focus is on generative AI, flitting between different modalities. He also likes teaching and making courses, having worked with both Hugging Face and fast.ai in these capacities. Johno recently reminded Hugo how hard everything was 10 years ago: "Want to install TensorFlow? Good luck. Need data? Perhaps try ImageNet. But now you can use big models from Hugging Face with hi-res satellite data and do all of this in a Colab notebook. Or think ecology and vision models… or medicine and multimodal models!" We talk about where we've come from regarding tooling and accessibility for foundation models, ML, and AI, where we are, and where we're going. We'll delve into:
What the generative AI mindset is, in terms of using atomic building blocks, and how it evolved from both the data science and ML mindsets;
How fast.ai democratized access to deep learning, what successes they had, and what was learned;
The moving parts now required to make GenAI and ML as accessible as possible;
The importance of focusing on UX and the application in the world of generative AI and foundation models;
The skillset and toolkit needed to be an LLM and AI guru;
What they're up to at answer.ai to democratize LLMs and foundation models.
LINKS
The livestream on YouTube (https://youtube.com/live/hxZX6fBi-W8?feature=share)
Zindi, the largest professional network for data scientists in Africa (https://zindi.africa/)
A new old kind of R&D lab: Announcing Answer.AI (http://www.answer.ai/posts/2023-12-12-launch.html)
Why and how I'm shifting focus to LLMs by Johno Whitaker (https://johnowhitaker.dev/dsc/2023-07-01-why-and-how-im-shifting-focus-to-llms.html)
Applying AI to Immune Cell Networks by Rachel Thomas (https://www.fast.ai/posts/2024-01-23-cytokines/)
Replicate -- a cool place to explore GenAI models, among other things (https://replicate.com/explore)
Hands-On Generative AI with Transformers and Diffusion Models (https://www.oreilly.com/library/view/hands-on-generative-ai/9781098149239/)
Johno on Twitter (https://twitter.com/johnowhitaker)
Hugo on Twitter (https://twitter.com/hugobowne)
Vanishing Gradients on Twitter (https://twitter.com/vanishingdata)
SciPy 2024 CFP (https://www.scipy2024.scipy.org/#CFP)
Escaping Generative AI Walled Gardens with Omoju Miller, a Vanishing Gradients Livestream (https://lu.ma/xonnjqe4)
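To make the accessibility point above concrete, here is a minimal sketch of the kind of "few lines in a Colab notebook" workflow Johno describes: pulling a pretrained model from the Hugging Face Hub and running inference immediately. The specific model and example text are illustrative assumptions, not choices mentioned in the episode.

```python
# A minimal sketch of pulling a pretrained model from the Hugging Face Hub
# and running it in a few lines, e.g. in a Colab notebook.
# Assumes `pip install transformers torch`; the model name below is an
# illustrative choice, not one discussed by the guests.
from transformers import pipeline

# Download a pretrained sentiment model and wrap it in an inference pipeline.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

# Run inference; no GPUs, custom datasets, or training loops required.
print(classifier("Tooling for foundation models has come a long way."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```

Ten years ago this would have meant installing a framework from source and sourcing data yourself; today it is a single dependency and two calls.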
To deliver effective legal services, lawyers need to be able to recognise and respond to their clients' trauma and take a more informed view of their clients' broader experiences. Here, we unpack how this can be done and why it is so essential. In this episode of The Lawyers Weekly Show, host Jerome Doraisamy speaks with Legal Aid NSW manager Jennifer Chen and lived experience advocate Rachel Thomas about what trauma-informed lawyering is, the introduction of a toolkit from the federal Attorney-General's Department and what it hopes to achieve, and the emergence of such an approach to legal services domestically and abroad. The trio also discuss the impact that a trauma-informed approach from a lawyer can have on clients in need, why it is so important (from a client perspective) for a lawyer to be trauma-informed, the potential consequences (for clients and court processes) if lawyers are not adequately trauma-informed, adhering to one's duties to clients and the court, overcoming scepticism about such an approach, and how best lawyers can better educate themselves on new ways of delivering legal services. If you like this episode, show your support by rating us or leaving a review on Apple Podcasts (The Lawyers Weekly Show) and by following Lawyers Weekly on social media: Facebook, Twitter and LinkedIn. If you have any questions about what you heard today, any topics of interest you have in mind, or if you'd like to lend your voice to the show, email editor@lawyersweekly.com.au for more insights!
"At 28 My marriage broke. So those days,to say that you're divorced was very difficult. At this point. My son was about ten, My daughter was about nine. They were young and I thank God that my mother and my sister was there to give me support and help me and you know, be there for me because I had to keep going for training . But to lift my head high and walk in Agra, the place where I was in was difficult." It is 1979, a small-town young lady, a mother of two small kids, just 23 years old, decides one evening at an Army Party, to join a Skydiving Course at Agra. Unknowingly history was created. Rachel Thomas was the first Indian woman to compete for India in a skydiving competition in 1987. After 23 years, she ended her career in 2002, when she skydived from 7,000 ft over the North Pole also creating the record of being the first Indian female to skydive over the North Pole. During her career, Rachel has completed 650 jumps in 18 countries. She has won multiple awards including the winner of the National Adventure Sports Award and is a TedX speaker. In 2005, Rachel was honoured again by the Government of India, with the fourth highest Indian civilian award - the Padmashree To stay up to date, follow @SmitaTharoor on Smita Tharoor (@SmitaTharoor) / Twitter or Smita Tharoor (@smitatharoor) | Instagram and follow the podcast on your favorite streaming service.
PARANORMAL TRAVEL: OUR MOST HAUNTED EXPERIENCES
Have you been? Are you going? Have you had similar experiences?
DRY DRINKS
Lemon Ginger Tea
Cappuccino via Tassimo Machine
Episode SPONSORED by The Magic Oil Box LLC
The Golden Tiki, Las Vegas, NV
The MOB MUSEUM
The Christmas House, Rancho Cucamonga, CA
The Whaley House, Old Town San Diego, CA
Tales from the Haunted South by Tiya Miles
Cape Charles Museum
The Invisible History of African Americans in Cape Charles, VA
Star of India
Pierre Biane Winery
Heritage Museum OC
Legends from the Pacific host Kamuela Kaneshiro
Haunted Attractions Network
Audio Clips:
Aloha Oe
Asteroid that Created Chesapeake Bay
Creepy Small World Piano
Banana Spider
Myrtle's Plantation, Most Haunted America
Haunted Stanley Hotel, Roni's actual Tour Guide!! Rachel Thomas!!
School in Flames, Carrie
Haunted Attic
Shining
Magic Oil Box
Happy Birthday
Support the show
www.FrolickingChronicles.com
Patreon for exclusive content
YouTube: Subscribe to our Channel
Instagram @FrolickingChronicles for updates & current events
TikTok @ParanormalCocktails for FUN
SELECTED LINKS FROM THE EPISODE
Teach Your Kids: Website | LinkedIn | X | Instagram | Substack | Facebook | TikTok
Manisha: LinkedIn | X | Instagram | Facebook
Kevin Donahue: LinkedIn | Website
Suren Markosian: LinkedIn | Website
Join our premium community for expert support and advice on homeschooling. Premium members also get access to our Epic! classroom for homeschoolers.
Teach Your Kids Podcast Episodes
Homeschooling with Naval Ravikant, Nir Eyal, Rachel Thomas, Kerry McDonald, Alycia Wright, Shiren Rattigan, and Lisa Betts-Lacroix: Part 1
Homeschooling with Naval Ravikant, Nir Eyal, Rachel Thomas, Kerry McDonald, Alycia Wright, Shiren Rattigan, and Lisa Betts-Lacroix: Part 2
Navigating Math with Curiosity: Jason Batterson & Jasmine Eyal on Beast Academy
Learning Platforms
Epic!
BrainPOP | BrainPOP Jr.
Books, Articles, Publications, and Videos
Richard Was a Picker - Carolyn Beck
Cat Ninja Children's Book Collection - Matthew Cody
Scaredy Monster - Meika Hashimoto
Math
Prodigy Game
Related Resources
Remind
Time-Codes
00:00:00 — Manisha introduces the episode and her guests.
00:02:07 — Suren discusses the inspiration behind Epic!.
00:04:06 — The unique challenge of engaging children in digital reading.
00:07:00 — Kevin explains the shift of Epic! into the education sector.
00:09:20 — Discussion on the challenges of raising funds for EdTech startups.
00:13:41 — Insights into developing a child-friendly user interface.
00:17:46 — Addressing content diversity and inclusivity on Epic!.
00:21:30 — The balance between gamification and educational content.
00:24:52 — Kevin shares his vision for children's holistic development.
00:27:28 — The growing trend of consumerization in education technology.
00:30:18 — Overcoming barriers to universal access to digital education.
00:33:50 — The rewarding experience of making a positive impact through Epic!.
00:37:24 — Advice for entrepreneurs in the educational space.
00:39:57 — Personal learning journeys of the founders.
00:43:21 — The importance of giving children control over their learning.
00:47:46 — The vast reach of digital books compared to physical publishing.
00:52:33 — The episode concludes with a summary of Epic!'s impact on children's digital reading habits and a final reflection on the importance of accessible and engaging educational technology.
This podcast was recorded on Riverside and is made possible through a generous grant from the VELA Education Fund.
VELA Education Fund is catalyzing a vibrant alternative education ecosystem. VELA provides trust-based funding to entrepreneurs, fosters community-building and knowledge-sharing, and increases visibility through storytelling that promotes cultural awareness and acceptance of the out-of-system space. Today, VELA serves the largest community of out-of-system education entrepreneurs in the country, with over 2,000 community members. About half of VELA's community members operate small learning environments, and the other half are ecosystem and community builders offering direct services and support across the out-of-system space. Learn more at velaedfund.org.
This site contains product affiliate links. We may receive a commission if you make a purchase after clicking on one of these links.
SELECTED LINKS FROM THE EPISODE
Teach Your Kids: LinkedIn | Website | X | Instagram | Substack | Facebook | TikTok
Manisha: LinkedIn | X | Instagram | Facebook
Meredith Olson: VELA Education Fund | LinkedIn | X
Join our premium community with expert support and advice
Donate to the VELA Education Fund
Teach Your Kids Podcasts
Homeschooling with Naval Ravikant, Nir Eyal, Rachel Thomas, Kerry McDonald, Alycia Wright, Shiren Rattigan, and Lisa Betts-Lacroix: Part 1
Homeschooling with Naval Ravikant, Nir Eyal, Rachel Thomas, Kerry McDonald, Alycia Wright, Shiren Rattigan, and Lisa Betts-Lacroix: Part 2
Microschooling with Iman Alleyne & Shiren Rattigan
Alycia Wright (Cultural Roots Homeschool Co-op): Building Homeschool Co-ops and Cultivating Community
The Homeschool Haven: Why Parents Are Choosing Brooklyn Apple Academy
Books, Articles, and Publications
CottageClass: A Microschool Hub That Connects Families With Small-Scale Teachers — & Takes Care of the Business Side – The 74
Organizations
VELA Education Fund
Walton Family Foundation
yes. every kid. foundation.
Koch Industries | Community Involvement & Philanthropy
Koch Family Foundation | Unleashing Potential
Stand Together | A Non-Profit Philanthropic Community
Time Codes
00:00:00 — Manisha Snoyer introduces Meredith Olson, discussing her significant role in the VELA Education Fund.
00:02:14 — Meredith shares her journey from engineering to her involvement in education and philanthropy.
00:07:00 — Meredith delves into her engagement with education policy and her approach to rethinking education.
00:10:51 — The concept of permissionless education, discussing its significance and implications, is explained.
00:11:14 — The establishment and mission of the VELA Education Fund are detailed by Meredith.
00:15:18 — Discussion on the growth and impact of the VELA Education Fund during the pandemic.
00:17:38 — Meredith speaks about the unique approach of trust-based funding in supporting educational entrepreneurs.
00:20:24 — Meredith shares her perspective on the future of education, focusing on innovation and new educational paradigms.
00:29:20 — Meredith Olson encourages parents to trust their instincts in making educational choices for their children.
00:35:25 — Manisha and Meredith highlight the significant benefits and growing acceptance of homeschooling and micro-schooling, and emphasize the crucial role of community and networking in supporting these educational approaches.
00:42:04 — Advice for raising funds for homeschooling or micro-school initiatives.
00:47:07 — Reflections on the need for educational models to evolve with technology are shared.
00:50:50 — The podcast concludes with Meredith's final thoughts and additional information about the VELA Education Fund.
This podcast was recorded on Riverside and is made possible through a generous grant from the VELA Education Fund.
VELA Education Fund is catalyzing a vibrant alternative education ecosystem. VELA provides trust-based funding to entrepreneurs, fosters community-building and knowledge-sharing, and increases visibility through storytelling that promotes cultural awareness and acceptance of the out-of-system space. Today, VELA serves the largest community of out-of-system education entrepreneurs in the country, with over 2,000 community members.
About half of VELA's community members operate small learning environments, and the other half are ecosystem and community builders offering direct services and support across the out-of-system space. Learn more at velaedfund.org.
This site contains product affiliate links. We may receive a commission if you make a purchase after clicking on one of these links.
SELECTED LINKS FROM THE EPISODE
Teach Your Kids: Website | LinkedIn | X | Instagram | Substack | Facebook | TikTok
Manisha: LinkedIn | X | Instagram | Facebook
Join our premium community with expert support and advice
Fill out the Teach Your Kids survey
Teach Your Kids Curriculum Planner
Teach Your Kids Podcast Episodes
Homeschooling with Naval Ravikant, Nir Eyal, Rachel Thomas, Kerry McDonald, Alycia Wright, Shiren Rattigan, and Lisa Betts-Lacroix: Part 1
Homeschooling with Naval Ravikant, Nir Eyal, Rachel Thomas, Kerry McDonald, Alycia Wright, Shiren Rattigan, and Lisa Betts-Lacroix: Part 2
Raising Indistractable Kids: Nir Eyal's Unconventional Approach to Homeschooling
But What About Socialization?
Manisha Snoyer's Expert Tips for Tailoring Your Child's Education: Navigating the Curriculum Maze
Samantha Snowden from Headspace: Meditating with Kids
Jeremy Howard's Journey: From Traditional Schools to Homeschooling
Teach Your Kids Blog Posts
Mastery Hours: Core Subjects for Your Power Hours
Optimize online tutoring with these easy tips
Sparking Independent Learning with Strewing | Modulo - Lesley Grossblatt
But what about "Socialization"?
Find the perfect homeschool curriculum for your unique child | Modulo
The Best PreK-12th Grade Math Curriculum of 2023
Books, Articles, and Publications
A People's History of the United States - Howard Zinn
Mastery Learning - Benjamin Bloom
Starting Strong IV Early Childhood Education and Care Data Country Note: Finland
Assessments & Evaluating Progress
Is your child on track?
Map Growth | Homeschool Boss
Decolonizing History Resources
Stanford History Education Group
Joy Hakim
Blossom & Root
Early Years
Kindergarten
1st Grade
2nd Grade
3rd Grade
4th Grade
5th Grade
Educational Communities
SEA Homeschoolers
SEA Homeschoolers Facebook
Related Resources
Outschool
Headspace
This site contains product affiliate links. We may receive a commission if you make a purchase after clicking on one of these links.
Time-Codes
00:00:00 — Manisha Snoyer introduces the episode topic: Homeschooling Community Q&A
00:01:42 — Discussion on scheduling and time management in homeschooling
00:13:55 — The importance of math and English language arts as foundational subjects, integrating practical life skills into the day, and starting teaching based on the child's readiness
00:15:58 — Strategies for homeschooling multiple children, including personalized one-on-one time for each child
00:21:20 — Manisha discusses the challenges of administrative tasks in homeschooling and suggests solutions such as swapping childcare with other parents
00:31:49 — Discussing natural consequences in learning, such as budgeting with an allowance, and the importance of volunteer work for understanding privilege and real-world issues
00:38:06 — Addressing socialization in homeschooling, the benefits of homeschooling communities, and strategies for making friends in new locations
00:47:01 — Manisha discusses resources for decolonizing homeschooling
00:52:40 — The importance of choosing the right curriculum to keep children motivated, and the value of allowing children to have downtime, even if it leads to periods of boredom
00:54:25 — Discussing strategies for homeschooling children with special needs
00:56:24 — Manisha talks about different types of screen time, ranging from educational to entertainment-focused, and the importance of balancing and adding valuable screen activities to a child's routine
01:04:08 — Manisha addresses the limitations of traditional schooling, emphasizing the benefits of homeschooling and the need to make a clear choice between the two
01:07:09 — Conclusion: Manisha encourages listeners to reach out with their questions and participate in the homeschooling community for support and guidance
This podcast is made possible through a generous grant from the VELA Education Fund.
VELA Education Fund is catalyzing a vibrant alternative education ecosystem. VELA provides trust-based funding to entrepreneurs, fosters community-building and knowledge-sharing, and increases visibility through storytelling that promotes cultural awareness and acceptance of the out-of-system space. Today, VELA serves the largest community of out-of-system education entrepreneurs in the country, with over 2,000 community members. About half of VELA's community members operate small learning environments, and the other half are ecosystem and community builders offering direct services and support across the out-of-system space. Learn more at velaedfund.org
Today we talk about poo and wee and what happens for children emotionally and physically when there are difficulties. Rachel Thomas is the lead specialist nurse for the Children's Bladder and Bowel Team for Gloucestershire. She talks us through the early and late interventions. There are lots of laughs but lots of great info too. She recommends https://eric.org.uk to answer all your questions, and the book Amy Gets Eaten by Adam Kay. Email me if you're thinking about the day or evening course: Madeleinestani@icloud.com. For your copy of Parenting For Life, pop here. For a parenting chat/consultation, pop here. Find me on IG @thecourageousmumma
Alberta Conservative MP Damien Kurek was kicked out of the House of Commons this week for using unparliamentary language. Within minutes of his expulsion, Kurek had the video of his outburst up on social media, proudly showcasing his outrage at the Liberal government. Last week, another Alberta Conservative, Rachel Thomas, was forced to apologize after requesting Heritage Minister Pascale St-Onge stop answering her questions in French and instead use English. Thomas wanted a social media clip her audience could understand. Social media is changing the way MPs are behaving on Parliament Hill and it's increasingly changing the way political parties court support. In this week's episode of "It's Political" we take a look at this issue from a number of different angles. First, Toronto Star columnist Susan Delacourt gives us an overview of how political communications has changed during her time in Ottawa. Then, MPs Kevin Waugh, Rob Oliphant and Stéphane Bergeron reflect on the demise of local media, where their constituents get their news, and how the new media landscape is changing the way MPs engage with one another. Later, I sit down with Canada Proud founder Jeff Ballingall, who worked with both Conservative Leader Pierre Poilievre and former leader Erin O'Toole, as well as Mélanie Richer, the former director of communications for NDP Leader Jagmeet Singh, and with Cameron Ahmad, Prime Minister Justin Trudeau's former director of communications. And finally, we'll hear about the impact an MP's social media campaign recently had on a member of the upper house, Senator Bernadette Clement. In this episode: Toronto Star national columnist Susan Delacourt, Saskatchewan Conservative MP Kevin Waugh, Ontario Liberal MP Rob Oliphant, Bloc Québécois MP Stéphane Bergeron, former Conservative media strategist and Mobilize Media president Jeff Ballingall, former director of communications for NDP Leader Jagmeet Singh and Earnscliffe senior consultant Mélanie Richer, Prime Minister Justin Trudeau's former director of communications Cameron Ahmad, and Ontario Independent Senator Bernadette Clement. Hosted by Althia Raj. Some of the clips this week were sourced from CPAC, the Senate, the House of Commons, CBC, Pierre Poilievre's Facebook page and Damien Kurek's X/Twitter account. This episode of "It's Political" was produced by Althia Raj and Michal Stein. Kevin Sexton mixed the program. Our theme music is by Isaac Joel.
Human trafficking is defined in international law as the use of fraud, force, or coercion to enslave people. It is a scourge that operates not just through physical coercion but through the devastating use of mind control, a method of psychological enslavement. Rebecca Bender is a courageous survivor and advocate who works tirelessly to empower survivors, train law enforcement, do expert witness work, and assist policymakers. She created the Rebecca Bender Initiative and the online Elevate Academy, which empowers trafficking survivors over the age of eighteen to move forward with their lives. She found encouragement and mentorship from Carissa Phelps and Rachel Thomas, who, with me, developed the Ending the Game ten-session program for trafficking survivors under the age of eighteen. Understanding the tactics traffickers use is critical for prevention, as well as escape and recovery. My discussions with experts and survivors are not just academic; they are practical resources that shine a light on the manipulative strategies used in trafficking. Knowledge of these tactics can be lifesaving for anyone at risk or currently trapped. In this exploration, we delve deep into the intricate web of human trafficking, examining the various manipulative tactics employed by traffickers and the significant role that education and empowerment play in the recovery process. Learn more about Steven Hassan and Freedom of Mind Resource Center. Visit freedomofmind.com Learn more about your ad choices. Visit megaphone.fm/adchoices
SELECTED LINKS FROM THE EPISODE
Teach Your Kids: LinkedIn | Website | X | Instagram | Substack | Facebook | TikTok
Manisha: LinkedIn | X | Instagram | Facebook
Join our premium community with expert support and advice
Interviews
Teach Your Kids: Game-Based Learning: The Prodigy Approach with Rohan Mahimker
Teach Your Kids: Navigating Math with Curiosity: Jason Batterson & Jasmine Eyal on Beast Academy
Modulo's Interview with Rachel Tidd (Founder of Wild Learning)
Books, Articles, and Publications
Mindset: The New Psychology of Success - Carol S. Dweck, PhD
Grit: The Power of Passion and Perseverance - Angela Duckworth
Female teachers' math anxiety affects girls' math achievement - Beilock, S. L., Gunderson, E. A., Ramirez, G., & Levine, S. C. (2010). Psychological and Cognitive Sciences
The 2 Sigma Problem: The Search for Methods of Group Instruction as Effective as One-to-One Tutoring - Bloom, B. S. (1984). Educational Researcher
General Curriculum Resources
Albert Bandura's Social Learning Theory
SEA Homeschoolers
SEA Homeschoolers Facebook
Tutoring
Mastery Learning Hour
Schoolhouse
Courses/Workshops
There's No Such Thing as Not a Math Person with Rachel Thomas, PhD
Math Curriculum
Preschool
Homer
Math Tango
Elementary
Beast Academy Online
RightStart Math
Wild Math
Prodigy Game
Singapore Math
Khan Academy Kids
DragonBox Algebra 5+
DragonBox Algebra 12+
Math Antics
High School
Khan Academy (PreK-12th grade)
Art of Problem Solving
Thinkwell Homeschool
This site contains product affiliate links. We may receive a commission if you make a purchase after clicking on one of these links.
Time Codes
00:00:20 — Manisha Snoyer introduces the episode's theme about empowering parents in math education
00:02:03 — Discussion on the impact of parents' beliefs on children's math learning capabilities
00:04:41 — Manisha debunks the common myth that one needs to be a natural "math person" to succeed in math
00:07:25 — The episode transitions to exploring collaborative math learning approaches between parents and children
00:09:02 — Manisha shares various innovative strategies for teaching math at home
00:11:26 — The benefits of personalized, one-on-one tutoring in mathematics are highlighted
00:13:55 — Manisha provides tips for choosing the right mastery-based math curriculum for children
00:16:44 — The importance of selecting a math curriculum that aligns with a child's unique learning archetype is discussed
00:19:33 — Examining community support and resources available for parents homeschooling their children
00:23:20 — A comparison of different math curricula suited for various learning archetypes
00:27:28 — Introduction to nature-based learning in mathematics
00:31:01 — Recommendations for suitable math programs for middle and high school students
00:34:21 — Manisha addresses common questions regarding the real-world importance of learning math
00:39:43 — The real-world applications of math in diverse careers are discussed
00:42:09 — The episode concludes with a summary and an invitation for further engagement on the topic
This podcast is made possible through a generous grant from the VELA Education Fund.
VELA Education Fund is catalyzing a vibrant alternative education ecosystem. VELA provides trust-based funding to entrepreneurs, fosters community-building and knowledge-sharing, and increases visibility through storytelling that promotes cultural awareness and acceptance of the out-of-system space. Today, VELA serves the largest community of out-of-system education entrepreneurs in the country, with over 2,000 community members.
About half of VELA's community members operate small learning environments, and the other half are ecosystem and community builders offering direct services and support across the out-of-system space. Learn more at velaedfund.org.
SELECTED LINKS FROM THE EPISODE
Teach Your Kids: Website | LinkedIn | X | Instagram | Substack | Facebook | TikTok
Manisha: LinkedIn | X | Instagram | Facebook
Join our premium community with expert support and advice
Teach Your Kids Curriculum Planner
General Curriculum Resources
Teach Your Kids Curriculum Planner
The Complete Guide to Secular Homeschool Curriculum
50 Educational Apps Your Kids Will Love
Mastery Hours: Core Subjects for Your Power Hours
How to afford homeschooling
What's a typical homeschool day look like?
Giving kids the time and space they need to teach themselves
Special Needs
Cognitive Diversity and homeschooling
All-in-One Curriculum
The top 12 all-in-one secular homeschool curricula of 2023
Khan Academy
Critical Thinking Co.
Time4Learning
Build Your Library
Torchlight
Blossom & Root
Early Years
Kindergarten
1st Grade
2nd Grade
3rd Grade
4th Grade
5th Grade
Math
The Best PreK-12th Grade Math Curriculum of 2023
RightStart Math
An honest review of RightStart Math (this guide can help you navigate RightStart Math as it can be a bit tricky)
There's No Such Thing as Not a Math Person with Rachel Thomas, PhD
Teach Your Kids: Game-Based Learning: The Prodigy Approach with Rohan Mahimker
Teach Your Kids: Navigating Math with Curiosity: Jason Batterson & Jasmine Eyal on Beast Academy
Manisha's Interview with Richard Rusczyk (Co-founder of Beast Academy and AoPS)
Beast Academy Online
Beast Academy Comic Books
Singapore Math
Thinkwell
Wild Math
Prodigy Game
Literacy
The top 4 tools to teach your child to read in 2023
Nessy
Homer
Teach Your Child to Read in 100 Easy Lessons
English Language Arts
English Language Arts
Nurturing Critical Thinkers
Teach your kiddo to write
The Ultimate Guide to Handwriting Curriculum in 2023
Does grammar matter?
Ameliorate your progeny's lexicon
Public Speaking for Kids
YouTube Channels
Art for Kids Hub - YouTube
Cosmic Kids Yoga - YouTube
Assessments & Evaluating Progress
Is your child on track?
Map Growth | Homeschool Boss
MobyMax for Families
Other
Our six favorite environmental science programs for kids (and grownups) in 2023
Teaching Kids Financial Literacy in the Age of SVB
Non-profits offering free or discounted internet and devices
Give Computers
AFTRR
Spectrum Internet Assist Program
Related Teach Your Kids Podcast Episodes
Raising Indistractable Kids: Nir Eyal's Unconventional Approach to Homeschooling
Rachel Thomas Unpacks Homeschooling: A Deep Dive into Screen Time, Apps, and Data Ethics
Back to Homeschool: With the Founder of Secular Eclectic Academic (SEA) Homeschoolers - Blair Lee
Raising Gifted Learners With Megan Cannella: Insights From a Gifted Learning Specialist About Identifying and Supporting Gifted Kids
Jeremy Howard's Journey: From Traditional Schools to Homeschooling
Miscellaneous
SEA Homeschoolers Facebook
Wild Learning
A nematode survived 46,000 years in permafrost : NPR
How to find and vet the best homeschool teachers
This site contains product affiliate links. We may receive a commission if you make a purchase after clicking on one of these links.
Time-Codes
00:00:00 — Introduction: Manisha Snoyer dives into the art of customizing children's education.
00:01:21 — The Two Pillars: Importance of a main curriculum and a math supplement.
00:03:57 — Keeping Kids Engaged: Manisha advocates for fun, interactive curricula.
00:06:44 — Quality Control: Testing curricula for accuracy and reliability.
00:08:08 — Universe Explored: The beauty of curiosity and scientific understanding.
00:10:46 — Inclusive Learning: Manisha talks about diversifying history and literature materials.
00:14:30 — A Cautionary Note: Manisha warns against curriculum biases.
00:17:00 — Urgency Underlined: The importance of teaching climate change.
00:23:37 — Learning Disabilities: How multiple disabilities affect curriculum choice.
00:26:22 — Game On!: The role of video games in curriculum selection.
00:29:46 — Empowering Parents: Trusting your instincts for your child's educational needs.
00:32:20 — Frugal Tips: Manisha offers ways to afford homeschooling.
00:37:00 — Fundamental Focus: The emphasis on math and literacy.
00:40:27 — Self-Directed Learning: Incorporating child-led education into the schedule.
00:49:00 — DIY Curriculum: How to create a tailored learning plan from various resources.
This podcast is made possible through a generous grant from the VELA Education Fund.
VELA Education Fund is catalyzing a vibrant alternative education ecosystem. VELA provides trust-based funding to entrepreneurs, fosters community-building and knowledge-sharing, and increases visibility through storytelling that promotes cultural awareness and acceptance of the out-of-system space. Today, VELA serves the largest community of out-of-system education entrepreneurs in the country, with over 2,000 community members. About half of VELA's community members operate small learning environments, and the other half are ecosystem and community builders offering direct services and support across the out-of-system space. Learn more at velaedfund.org.
SELECTED LINKS FROM THE EPISODE
Teach Your Kids: LinkedIn | Website | X | Instagram | Substack | Facebook | TikTok
Manisha: LinkedIn | X | Instagram | Facebook
Jason Batterson: LinkedIn
Jasmine Eyal: LinkedIn | Website
Join our premium community with expert support and advice
Teach Your Kids Podcast Episodes
Manisha's Interview with Richard Rusczyk (Co-founder of Beast Academy and AoPS)
Homeschooling a Quantum Innovator: Meet 15-Year-Old Jasmine
Raising Gifted Learners With Megan Cannella: Insights From a Gifted Learning Specialist About Identifying and Supporting Gifted Kids
Teach Your Kids Blog Posts
50 Educational Apps Your Kids Will Love
The Best PreK-12th Grade Math Curriculum of 2023
A Comprehensive Review of Beast Academy by an experienced math teacher: Is it a good choice for your child?
Curriculum
Beast Academy
BA Playground, Ages 4+
Art of Problem Solving
Alcumus
Art of Problem Solving Initiative | BEAM - A program to help underserved students enter advanced study in mathematics.
Introduction to Programming with Python Online Programming Course
Find the perfect homeschool curriculum for your unique child
Cognitive Diversity and homeschooling
Typing.com
Books, Videos and Articles
Math from Three to Seven: The Story of a Mathematical Circle for Preschoolers - Alexander Zvonkin
Ask a Scientist: When Are Children Ready to Learn Abstract Math? (Description of conserving and non-conserving math skills.)
There's No Such Thing as Not a Math Person with Rachel Thomas, PhD
This site contains product affiliate links. We may receive a commission if you make a purchase after clicking on one of these links.
Time Codes
00:00:20 — Introduction: Manisha introduces Jason Batterson, the co-founder of Beast Academy.
00:02:40 — Unveiling Changes: Jason outlines updates in Beast Academy's online program.
00:05:20 — Accessibility Goals: Discussion on making Beast Academy resources easier for schools and students.
00:07:30 — The Double-edged Sword: Batterson weighs in on the challenges and benefits of the platform.
00:11:37 — Who Benefits the Most? Batterson describes the ideal student for Beast Academy.
00:14:20 — Manisha on Gamified Education: A look at how gamification has made math more enjoyable for kids.
00:16:40 — When to Start Math: Jason talks about the right age for kids to start learning math.
00:21:44 — The Anxiety of Competition: Jason discusses stress related to math contests.
00:24:40 — Spotting Math Giftedness: Manisha asks Jason for signs of math giftedness in children.
00:27:44 — Batterson's Parental Insight: Jason shares how he fuels his son's curiosity.
00:30:04 — Small Victories: Discussing the role of small achievements in learning math.
00:33:40 — The Cartoon Dilemma: Jason talks about the challenges of extending Beast Academy's cartoon style.
00:37:15 — Deep Dive into Curriculum: Jason speaks on Beast Academy's curriculum and its complexity.
00:43:10 — The Joy in Math: Discussing the sheer joy of problem-solving in mathematics.
00:51:21 — Inclusivity in Education: Manisha talks about diagnosing gifted children in under-resourced families.
00:56:42 — Networking for Outreach: Manisha and Jasmine discuss using networks to reach families in need.
01:02:34 — Curriculum Writing 101: Jason provides tips on crafting math curriculum.
01:07:20 — Empowering Kids: Manisha commends Jason's balanced approach.
This podcast is made possible through a generous grant from the VELA Education Fund.
VELA Education Fund is catalyzing a vibrant alternative education ecosystem.
VELA provides trust-based funding to entrepreneurs, fosters community-building and knowledge-sharing, and increases visibility through storytelling that promotes cultural awareness and acceptance of the out-of-system space. Today, VELA serves the largest community of out-of-system education entrepreneurs in the country, with over 2,000 community members. About half of VELA's community members operate small learning environments, and the other half are ecosystem and community builders offering direct services and support across the out-of-system space. Learn more at velaedfund.org.
SELECTED LINKS FROM THE EPISODE
Teach Your Kids: LinkedIn | Website | X | Instagram | Substack | Facebook | TikTok
Manisha: LinkedIn | X | Instagram | Facebook
Britt Kjerstin Hamre | Faculty Profile | Teachers College, Columbia University
Join our premium community with expert support and advice
Teach Your Kids Podcast Episodes
Homeschooling with Naval Ravikant, Nir Eyal, Rachel Thomas, Kerry McDonald, Alycia Wright, Shiren Rattigan, and Lisa Betts-Lacroix: Part 1
Game-Based Learning: The Prodigy Approach with Rohan Mahimker
Unlocking Homeschool Success: Julie Bogart on Teaching Writing at Home
The Homeschool Haven: Why Parents Are Choosing Brooklyn Apple Academy
A Whole Child Approach With Bank Street Professor Deb Vilas: Transforming Child Life Care
Teach Your Kids Blog Posts
Family involvement in education
Mastery Hours: Core Subjects for Your Power Hours
Books
We Want to Do More Than Survive: Abolitionist Teaching and the Pursuit of Educational Freedom - Bettina Love
Punished for Dreaming: How School Reform Harms Black Children and How We Heal - Bettina Love
Social Studies for a Better World: An Anti-oppressive Approach for Elementary Educators - Noreen Naseem Rodriguez and Katy Swalwell
On Fire: The (Burning) Case for a Green New Deal - Naomi Klein
Learning Resources
Curriculum
Brave Writer
Writopia Lab
Classes and programs
Brooklyn Apple Academy (The homeschool co-op in Brooklyn that Britt's son attended.)
Miscellaneous
Teachers College, Columbia University
Britt Kjerstin Hamre | Faculty Profile | Teachers College, Columbia University
Teachers College Inclusive Classrooms Project
This site contains product affiliate links. We may receive a commission if you make a purchase after clicking on one of these links.
Time Codes
00:00:00 - Manisha introduces the episode and the highly qualified guest, Dr. Britt Hamre.
00:01:20 - Manisha and Dr. Britt Hamre's past collaborations lend credibility to the episode.
00:03:00 - Dr. Britt Hamre details the elementary inclusive program at Columbia Teachers College.
00:05:40 - Dr. Britt Hamre talks about the importance of inquiry, curiosity, and continuous learning.
00:09:24 - Manisha and Dr. Britt Hamre discuss the utility of teaching techniques like growth mindset and grit.
00:11:20 - Dr. Britt Hamre discusses potential cultural biases associated with the concept of grit, referencing scholar Bettina Love.
00:15:49 - Dr. Britt Hamre emphasizes aligning learning with children's passions and interests.
00:17:54 - Dr. Britt Hamre shares her personal experience with her son's early reading abilities.
00:22:03 - Both host and guest underline the need for fostering civic action and critical thinking in children's education.
00:25:49 - Manisha and Dr. Britt Hamre discuss the educational concept of scaffolding.
00:27:00 - Dr. Britt Hamre discusses her approach to curriculum development.
00:31:00 - Manisha correlates teaching to tech industry practices, emphasizing the value of breaking down complex topics.
00:34:40 - Dr. Britt Hamre discusses the value of trusting the child's natural pace in learning.
00:37:00 - Dr. Britt Hamre shares how different schooling systems can fit different children, based on her own family's experience.
00:40:00 - The conversation shifts towards the diversity in educational experiences and approaches.
00:43:28 - Dr. Britt Hamre recommends the "Understanding by Design" approach to curriculum development.
00:45:57 - The discussion dives into the importance of setting clear outcomes and measurements for learning.
00:49:45 - Dr. Britt Hamre discusses her current educational focus areas, including anti-oppressive teaching methods.
00:51:20 - The episode concludes by emphasizing the importance of community in homeschooling.
This podcast is made possible through a generous grant from the VELA Education Fund.
VELA Education Fund is catalyzing a vibrant alternative education ecosystem. VELA provides trust-based funding to entrepreneurs, fosters community-building and knowledge-sharing, and increases visibility through storytelling that promotes cultural awareness and acceptance of the out-of-system space. Today, VELA serves the largest community of out-of-system education entrepreneurs in the country, with over 2,000 community members. About half of VELA's community members operate small learning environments, and the other half are ecosystem and community builders offering direct services and support across the out-of-system space. Learn more at velaedfund.org.
SELECTED LINKS FROM THE EPISODE
Teach Your Kids: Website | LinkedIn | X | Instagram | Substack | Facebook
Manisha: LinkedIn | X | Instagram | Facebook
Rachel Thomas: Fast.ai | LinkedIn | X | Facebook
Join our premium community with expert support and advice
Teach Your Kids Blog Posts
50 Educational Apps Your Kids Will Love
Rachel Thomas
Fast.ai
Blog Posts
My family's unlikely homeschooling journey
My family's unlikely homeschooling journey - Hacker News thread
Books
Erin Hunter: Warrior books
Mindset: The New Psychology of Success - Carol S. Dweck, PhD
Articles
The Plan to Stop Every Respiratory Virus at Once
Chess players perform worse when air quality is poor – and other high-skilled workers could be affected too
Investing in indoor air quality improvements in schools will reduce COVID transmission and help students learn
Publications
Clonally expanded B cells in multiple sclerosis bind EBV EBNA1 and GlialCAM
AI/Coding Resources
Practical Deep Learning for Coders
Scratch | ScratchJr
Courses/Workshops
There's No Such Thing as Not a Math Person with Rachel Thomas, PhD
Learning Apps/Platforms
Art for Kids Hub - YouTube
Learn Chess with DragonBox
Miscellaneous
Mastery Learning Hour
This site contains product affiliate links. We may receive a commission if you make a purchase after clicking on one of these links.
Time Codes
00:00:00 — Manisha introduces the episode and her guest, Rachel Thomas, highlighting her impressive background in AI, data ethics, and education.
00:01:32 — Rachel shares her initial hesitation and surprise at the idea of homeschooling.
00:03:00 — A fresh perspective on how homeschooling can differ from traditional schooling.
00:05:00 — Omicron's influence on the family's return to homeschooling.
00:09:36 — Family dynamics during the pandemic: Screen time becomes relational.
00:15:37 — Discussion on tech gadgets that empower learners.
00:17:40 — The evolution of educational apps, from DragonBox Algebra to Slice Fractions.
00:21:25 — Outdoor vs. indoor activities: A balanced approach.
00:28:20 — The importance of personalized education.
00:33:16 — Deep learning's day-to-day applications explored.
00:41:40 — Addressing the plague of poor math teaching.
00:47:48 — The psychological risks of perfectionism in traditional schooling.
00:52:00 — Breaking down the viral myth: viruses vs. healthy bacteria.
00:59:00 — Homeschooling: It's not just for the "cognitive elite."
01:05:35 — Manisha wraps up the episode by discussing the availability of free resources and grants for families. She emphasizes the importance of making education accessible to all.
This podcast is made possible through a generous grant from the VELA Education Fund.
VELA Education Fund is catalyzing a vibrant alternative education ecosystem. VELA provides trust-based funding to entrepreneurs, fosters community-building and knowledge-sharing, and increases visibility through storytelling that promotes cultural awareness and acceptance of the out-of-system space. Today, VELA serves the largest community of out-of-system education entrepreneurs in the country, with over 2,000 community members. About half of VELA's community members operate small learning environments, and the other half are ecosystem and community builders offering direct services and support across the out-of-system space. Learn more at velaedfund.org.
Thanks to the over 17,000 people who have joined the first AI Engineer Summit! A full recap is coming. Last call to fill out the State of AI Engineering survey! See our Community page for upcoming meetups in SF, Paris and NYC.

This episode had good interest on Twitter.

Fast.ai's "Practical Deep Learning" courses have been watched by over 6,000,000 people, and the fastai library has over 25,000 stars on GitHub. Jeremy Howard, one of the creators of fast.ai, is now one of the most prominent and respected voices in the machine learning industry; but that wasn't always the case.

Being non-consensus and right

In 2018, Jeremy and Sebastian Ruder published a paper on ULMFiT (Universal Language Model Fine-tuning), a 3-step transfer learning technique for NLP tasks. The paper demonstrated that pre-trained language models could be fine-tuned on a specific task with a relatively small amount of data to achieve state-of-the-art results. They trained a 24M-parameter model on WikiText-103, which beat most benchmarks.

While the paper had great results, the methods behind it weren't taken seriously by the community:

"Everybody hated fine tuning. Everybody hated transfer learning. I literally did tours trying to get people to start doing transfer learning and nobody was interested, particularly after GPT showed such good results with zero shot and few shot learning […] which I was convinced was not the right direction, but who's going to listen to me, cause as you said, I don't have a PhD, not at a university… I don't have a big set of computers to fine tune huge transformer models."

Five years later, fine-tuning is at the center of most major discussion topics in AI (we covered some, like fine-tuning vs RAG and small-model fine-tuning), and we might have gotten here earlier if Jeremy had OpenAI-level access to compute and distribution. At heart, Jeremy has always been "GPU poor":

"I've always been somebody who does not want to build stuff on lots of big computers because most people don't have lots of big computers and I hate creating stuff that most people can't use."

This story is a good reminder of how some of the best ideas are hiding in plain sight; we recently covered RWKV and will continue to highlight the most interesting research that isn't being done in the large labs.

Replacing fine-tuning with continued pre-training

Even though fine-tuning is now mainstream, we still have a lot to learn. The issue of "catastrophic forgetting" and potential solutions have been brought up in many papers: at the fine-tuning stage, the model can forget tasks it previously knew how to solve in favor of new ones. The other issue is apparent memorization of the dataset even after a single epoch, which Jeremy covered in Can LLMs learn from a single example?, but which we still don't have an answer to. Despite being the creator of ULMFiT, Jeremy still professes that there are a lot of open questions on fine-tuning:

"So I still don't know how to fine tune language models properly and I haven't found anybody who feels like they do."

He now advocates for "continued pre-training" - maintaining a diversity of data throughout the training process rather than separate pre-training and fine-tuning stages.
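For reference, the classic three-step ULMFiT recipe maps fairly directly onto the fastai text API. The sketch below is illustrative rather than canonical - the dataset, epoch counts, and metrics are our choices, not anything from the paper: step 1 is the WikiText-103 pre-training that ships inside the AWD_LSTM weights, step 2 fine-tunes that language model on the target corpus, and step 3 fine-tunes a classifier on the downstream task.

```python
from fastai.text.all import *

path = untar_data(URLs.IMDB)

# Step 1 (already done for us): AWD_LSTM ships with weights
# pre-trained as a language model on WikiText-103.

# Step 2: fine-tune the language model on the target corpus (IMDB reviews).
dls_lm = TextDataLoaders.from_folder(path, is_lm=True, valid_pct=0.1)
learn_lm = language_model_learner(dls_lm, AWD_LSTM, metrics=accuracy)
learn_lm.fine_tune(3)
learn_lm.save_encoder('imdb_encoder')

# Step 3: fine-tune on the downstream task (sentiment classification),
# reusing the encoder and vocabulary from step 2.
dls_clf = TextDataLoaders.from_folder(path, valid='test', text_vocab=dls_lm.vocab)
learn_clf = text_classifier_learner(dls_clf, AWD_LSTM, metrics=accuracy)
learn_clf.load_encoder('imdb_encoder')
learn_clf.fine_tune(3)
```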
Mixing instructional data, exercises, code, and other modalities while gradually curating higher-quality data can avoid catastrophic forgetting and lead to more robust capabilities (something we covered in Datasets 101).

"Even though I originally created the three-step approach that everybody now does, my view is it's actually wrong and we shouldn't use it… the right way to fine-tune language models is to actually throw away the idea of fine-tuning. There's no such thing. There's only continued pre-training. And pre-training is something where from the very start, you try to include all the kinds of data that you care about, all the kinds of problems that you care about, instructions, exercises, code, general purpose document completion, whatever. And then as you train, you gradually curate that, you know, you gradually make that higher and higher quality and more and more specific to the kinds of tasks you want it to do. But you never throw away any data… So yeah, that's now my view, is I think ULMFiT is the wrong approach. And that's why we're seeing a lot of these so-called alignment taxes… I think it's actually because people are training them wrong."

An example of this phenomenon is Code Llama, a LLaMA 2 model fine-tuned on 500B tokens of code: while the model is much better at code, it's worse at generic tasks that LLaMA 2 knew how to solve well before the fine-tuning.

In the episode we also dive into all the places where open source model development and research is happening (academia vs Discords - tracked on our Communities list and on our survey), and how Jeremy recommends getting the most out of these diffuse, pseudonymous communities (similar to the Eleuther AI Mafia).

Show Notes
* Jeremy's Background
* FastMail
* Optimal Decisions
* Kaggle
* Enlitic
* fast.ai
* Rachel Thomas
* Practical Deep Learning
* fastai for PyTorch
* nbdev
* fastec2 (the underrated library we describe)
* Can LLMs learn from a single example?
* the Kaggle LLM Science Exam competition, which "challenges participants to answer difficult science-based questions written by a Large Language Model"
* Sebastian Ruder
* Alec Radford
* Sylvain Gugger
* Stephen Merity
* Chris Lattner
* Modular.ai / Mojo
* Jono Whitaker
* Zeiler and Fergus paper
* ULMFiT
* DAWNBench
* Phi-1
* Code Llama
* AlexNet

Timestamps
* [00:00:00] Intros and Jeremy's background
* [00:05:28] Creating ULMFiT - a breakthrough in NLP using transfer learning
* [00:06:32] The rise of GPT and the appeal of few-shot learning over fine-tuning
* [00:10:00] Starting Fast.ai to distribute AI capabilities beyond elite academics
* [00:14:30] How modern LMs like ChatGPT still follow the ULMFiT 3-step approach
* [00:17:23] Meeting with Chris Lattner on Swift for TensorFlow at Google
* [00:20:00] Continued pre-training as a fine-tuning alternative
* [00:22:16] Fast.ai and looking for impact vs profit maximization
* [00:26:39] Using Fast.ai to create an "army" of AI experts to improve their domains
* [00:29:32] Fast.ai's 3 focus areas - research, software, and courses
* [00:38:42] Fine-tuning memorization and training curve "clunks" before each epoch
* [00:46:47] Poor training and fine-tuning practices may be causing alignment failures
* [00:48:38] Academia vs Discords
* [00:53:41] Jeremy's high hopes for Chris Lattner's Mojo and its potential
* [01:05:00] Adding capabilities like SQL generation through quick fine-tuning
* [01:10:12] Rethinking Fast.ai courses for the AI-assisted coding era
* [01:14:53] Rapid model development has created major technical debt
* [01:17:08] Lightning Round
AI Summary (beta)

This is the first episode where we're trying this. Here's an overview of the main topics before you dive into the transcript.

* Jeremy's background and philosophies on AI
  * Studied philosophy and cognitive science in college
  * Focused on ethics and thinking about AI even 30 years ago
  * Believes AI should be accessible to more people, not just elite academics/programmers
  * Created fast.ai to make deep learning more accessible
* Development of transfer learning and ULMFiT
  * Idea of transfer learning critical for making deep learning accessible
  * ULMFiT pioneered transfer learning for NLP
  * Proposed training general language models on large corpora then fine-tuning - this became standard practice
  * Faced skepticism from the NLP community that this approach would work
  * Showed state-of-the-art results on text classification soon after trying it
* Current open questions around fine-tuning LLMs
  * Models appear to memorize training data extremely quickly (after 1 epoch)
  * This may hurt training dynamics and cause catastrophic forgetting
  * Unclear how best to fine-tune models to incorporate new information/capabilities
  * Need more research on model training dynamics and ideal data mixing
* Exciting new developments
  * Mojo and new programming languages like Swift could enable faster model innovation
  * Still lots of room for computer-vision-like innovations in transformers
  * Small models with fine-tuning may be surprisingly capable for many real-world tasks
  * Prompting strategies enable models like GPT-3 to achieve new skills like playing chess at superhuman levels
  * LLMs are like computer vision in 2013 - on the cusp of huge new breakthroughs in capabilities
* Access to AI research
  * Many key convos happen in private Discord channels and forums
  * Becoming part of these communities can provide great learning opportunities
  * Being willing to do real work, not just talk about ideas, is key to gaining access
* The future of practical AI
  * Coding becoming more accessible to non-programmers through AI assistance
  * Pre-requisite programming experience for learning AI may no longer be needed
  * Huge open questions remain about how to best train, fine-tune, and prompt LLMs

Transcript

Alessio: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO in Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI. [00:00:21]Swyx: Hey, and today we have in the remote studio, Jeremy Howard all the way from Australia. Good morning. [00:00:27]Jeremy: The remote studio, also known as my house. Good morning. Nice to see you. [00:00:32]Swyx: Nice to see you too. I'm actually very used to seeing you in your mask as a message to people, but today we're mostly audio. But thank you for doing the very important public service of COVID awareness. It was a pleasure. [00:00:46]Jeremy: It was all very annoying and frustrating and tedious, but somebody had to do it. [00:00:52]Swyx: Somebody had to do it, especially somebody with your profile. I think it really drives home the message. So we tend to introduce people for them and then ask people to fill in the blanks on the personal side. Something I did not know about you was that you graduated with a BA in philosophy from the University of Melbourne. I assumed you had a PhD. [00:01:14]Jeremy: No, I mean, I barely got through my BA because I was working 80 to 100 hour weeks at McKinsey and Company from 19 years old onwards. So I actually didn't attend any lectures in second and third year university.
[00:01:35]Swyx: Well, I guess you didn't need it or you're very sort of self-driven and self-motivated. [00:01:39]Jeremy: I took two weeks off before each exam period when I was working at McKinsey. And then, I mean, I can't believe I got away with this in hindsight, I would go to all my professors and say, oh, I was meant to be in your class this semester and I didn't quite turn up. Were there any assignments I was meant to have done, whatever. I can't believe all of them let me basically have it. They basically always would say like, okay, well, if you can have this written by tomorrow, I'll accept it. So yeah, stressful way to get through university, but. [00:02:12]Swyx: Well, it shows that, I guess, you min-maxed the opportunities. That definitely was a precursor. [00:02:18]Jeremy: I mean, funnily, like in as much as I, you know, in philosophy, the things I found interesting and focused on in the little bit of time I did spend on it were ethics and cognitive science. And it's kind of really amazing that it's now come back around and those are actually genuinely useful things to know about, which I never thought would happen. [00:02:38]Swyx: A lot of, yeah, a lot of relevant conversations there. So you were a consultant for a while and then in the magical month of June 1999, you founded both Optimal Decisions and FastMail, which I also briefly used. So thank you for that. [00:02:53]Jeremy: Oh, good for you. Yeah. Cause I had read the statistics, which is that like 90% or something of small businesses fail. So I thought if I start two businesses, I have a higher chance. In hindsight, I was thinking of it as some kind of stochastic thing I didn't have control over, but it's a bit odd, but anyway. [00:03:10]Swyx: And then you were president and chief scientist at Kaggle, which obviously is the sort of competition platform of machine learning. And then Enlitic, where you were working on using deep learning to improve medical diagnostics and clinical decisions. Yeah. [00:03:28]Jeremy: It was actually the first company to use deep learning in medicine, so I kind of founded the field. [00:03:33]Swyx: And even now that's still like a pretty early phase. And I actually heard you on your new podcast with Tanishq, where you went very, very deep into the stuff, the kind of work that he's doing, such a young prodigy at his age. [00:03:47]Jeremy: Maybe he's too old to be called a prodigy now, ex-prodigy. No, no. [00:03:51]Swyx: I think he still counts. And anyway, just to round out the bio, you have a lot more other credentials, obviously, but most recently you started Fast.ai, which is still, I guess, your primary identity with Rachel Thomas. So welcome. [00:04:05]Jeremy: Yep. [00:04:06]Swyx: Thanks to my wife. Thank you. Yeah. Doing a lot of public service there with getting people involved in AI, and I can't imagine a better way to describe it than fast, fast.ai. You teach people from nothing to stable diffusion in seven weeks or something, and that's amazing. Yeah, yeah. [00:04:22]Jeremy: I mean, it's funny, you know, when we started that, what was that, like 2016 or something, the idea that deep learning was something that you could make more accessible was generally considered stupid. Everybody knew that deep learning was a thing that you got a math or a computer science PhD, you know, there was one of five labs that could give you the appropriate skills and that you would join, yeah, basically from one of those labs, you might be able to write some papers.
So yeah, the idea that normal people could use that technology to do good work was considered kind of ridiculous when we started it. And we weren't sure if it was possible either, but we kind of felt like we had to give it a go because the alternative was we were pretty sure that deep learning was on its way to becoming, you know, the most or one of the most, you know, important technologies in human history. And if the only people that could use it were a handful of computer science PhDs, that seemed like A, a big waste and B, kind of dangerous. [00:05:28]Swyx: Yeah. [00:05:29]Alessio: And, you know, well, I just wanted to note one thing on your bio, that at Kaggle, you were also the top-ranked participant in both 2010 and 2011. So sometimes you see a lot of founders running companies that are not really in touch with the problem, but you were clearly building something that you knew a lot about, which is awesome. Talking about deep learning, you created, published a paper on ULMFiT, which was kind of the predecessor to multitask learning and a lot of the groundwork that then went into Transformers. I've read back on the paper and you tuned this model, AWD-LSTM, which I did the math and it was like 24 to 33 million parameters, depending on what training data set you use. Today, that's kind of like not even small, it's like super small. What were some of the kind of like contrarian takes that you had at the time and maybe set the stage a little bit for the rest of the audience on what was kind of like the state of the art, so to speak, at the time and what people were working towards? [00:06:32]Jeremy: Yeah, the whole thing was a contrarian take, you know. So okay, so we started Fast.ai, my wife and I, and we thought, yeah, so we're trying to think, okay, how do we make it more accessible? So when we started thinking about it, it was probably 2015 and then 2016, we started doing something about it. Why is it inaccessible? Okay, well, A, no one knows how to do it other than a few number of people. And then when we asked those few number of people, well, how do you actually get good results? They would say like, oh, it's like, you know, a box of tricks that aren't published. So you have to join one of the labs and learn the tricks. So a bunch of unpublished tricks, not much software around, but thankfully there was Theano and wrappers, and particularly Lasagne, the wrapper, but yeah, not much software around, not much in the way of data sets, you know, very hard to get started in terms of the compute. Like how do you get that set up? So yeah, no, everything was kind of inaccessible. And you know, as we started looking into it, we had a key insight, which was like, you know what, most of the compute and data for image recognition, for example, we don't need to do it. You know, there's this thing which nobody knows about, nobody talks about called transfer learning, where you take somebody else's model, where they already figured out like how to detect edges and gradients and corners and text and whatever else, and then you can fine tune it to do the thing you want to do. And we thought that's the key. That's the key to becoming more accessible in terms of compute and data requirements. So when we started Fast.ai, we focused from day one on transfer learning.
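That vision-side insight is exactly what the fastai library later condensed into a few lines. As a rough sketch of the kind of workflow he's describing (the dataset, architecture, and epoch count here are our illustrative choices):

```python
from fastai.vision.all import *

# A backbone pre-trained on ImageNet already detects edges, gradients,
# corners and textures; we only need to adapt it to a new task.
path = untar_data(URLs.PETS)/'images'

def is_cat(fname):
    # In the Oxford-IIIT Pet filenames, cat breeds start with an uppercase letter.
    return fname[0].isupper()

dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))

# Transfer learning: train a new head on top of the frozen backbone,
# then unfreeze and fine-tune the whole network.
learn = vision_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)
```

The pre-trained backbone supplies the edge, gradient, and corner detectors; only the task-specific layers have to be learned from the new data, which is why so little compute and data are needed.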
Lesson one, in fact, was transfer learning, literally lesson one, something not normally even mentioned in, I mean, there wasn't much in the way of courses, you know, the courses out there were PhD programs that had happened to have recorded their lessons and they would rarely mention it at all. We wanted to show how to do four things that seemed really useful. You know, work with vision, work with tables of data, work with kind of recommendation systems and collaborative filtering and work with text, because we felt like those four kind of modalities covered a lot of the stuff that, you know, are useful in real life. And no one was doing anything much useful with text. Everybody was talking about word2vec, you know, like king plus queen minus woman and blah, blah, blah. It was like cool experiments, but nobody's doing anything like useful with it. NLP was all like lemmatization and stop words and topic models and bigrams and SVMs. And it was really academic and not practical. But I mean, to be honest, I've been thinking about this crazy idea for nearly 30 years since I had done cognitive science at university, where we talked a lot about Searle's Chinese room experiment. This idea of like, what if there was somebody that could kind of like, knew all of the symbolic manipulations required to answer questions in Chinese, but they didn't speak Chinese and they were kind of inside a room with no other way to talk to the outside world other than taking in slips of paper with Chinese written on them and then they do all their rules and then they pass back a piece of paper with Chinese back. And this room with a person in it is actually fantastically good at answering any question you give them written in Chinese. You know, do they understand Chinese? And is this, you know, something that's intelligently working with Chinese? Ever since that time, I'd say the most thought, to me, the most thoughtful and compelling philosophical response is yes. You know, intuitively it feels like no, because that's just because we can't imagine such a large kind of system. But you know, if it looks like a duck and acts like a duck, it's a duck, you know, or to all intents and purposes. And so I always kind of thought, you know, so this is basically a kind of analysis of the limits of text. And I kind of felt like, yeah, if something could ingest enough text and could use the patterns it saw to then generate text in response to text, it could appear to be intelligent, you know. And whether that means it is intelligent or not is a different discussion and not one I find very interesting. Yeah. And then when I came across neural nets when I was about 20, you know, what I learned about the universal approximation theorem and stuff, and I started thinking like, oh, I wonder if like a neural net could ever get big enough and take in enough data to be a Chinese room experiment. You know, with that background and this kind of like interest in transfer learning, you know, I'd been thinking about this thing for kind of 30 years and I thought like, oh, I wonder if we're there yet, you know, because we have a lot of text. Like I can literally download Wikipedia, which is a lot of text. And I thought, you know, how would something learn to kind of answer questions or, you know, respond to text? And I thought, well, what if we used a language model? So language models were already a thing, you know, they were not a popular or well-known thing, but they were a thing.
But language models embodied this idea that you could train a model to fill in the gaps. Or actually in those days it wasn't fill in the gaps, it was finish a string. And in fact, Andrej Karpathy did his fantastic RNN demonstration of this at a similar time, where he showed like you can have it ingest Shakespeare and it will generate something that looks a bit like Shakespeare. I thought, okay, so if I do this at a much bigger scale, using all of Wikipedia, what would it need to be able to do to finish a sentence in Wikipedia effectively, to do it quite accurately quite often? I thought, geez, it would actually have to know a lot about the world, you know, it'd have to know that there is a world and that there are objects and that objects relate to each other through time and cause each other to react in ways and that causes precede effects and that, you know, when there are animals and there are people and that people can be in certain positions during certain timeframes and then, you know, putting all that together, you can then finish a sentence like "this was signed into law in 2016 by US President X" and it would fill in the gap, you know. So that's why I tried to create what in those days was considered a big language model, trained on the entirety of Wikipedia, which was, you know, a bit unheard of. And my interest was not in, you know, just having a language model. My interest was in like, what latent capabilities would such a system have that would allow it to finish those kind of sentences? Because I was pretty sure, based on our work with transfer learning and vision, that I could then suck out those latent capabilities by transfer learning, you know, by fine-tuning it on a task data set or whatever. So we generated this three-step system. So step one was train a language model on a big corpus. Step two was fine-tune a language model on a more curated corpus. And step three was further fine-tune that model on a task. And of course, that's what everybody still does today, right? That's what ChatGPT is. And so the first time I tried it, within hours I had a new state-of-the-art academic result on IMDB. And I was like, holy s**t, it does work. And so you asked, to what degree was this kind of like pushing against the established wisdom? You know, every way. Like the reason it took me so long to try it was because I asked all my friends in NLP if this could work. And everybody said, no, it definitely won't work. It wasn't like, oh, maybe. Everybody was like, it definitely won't work. NLP is much more complicated than vision. Language is a vastly more complicated domain. You know, and you've got problems like the grounding problem. We know from like philosophy and theory of mind that it's actually impossible for it to work. So yeah, so don't waste your time. [00:15:10]Alessio: Jeremy, had people not tried because it was like too complicated to actually get the data and like set up the training? Or like, were people just lazy and kind of like, hey, this is just not going to work? [00:15:20]Jeremy: No, everybody wasn't lazy. So like, so the person I thought at that time who, you know, there were two people I thought at that time, actually, who were the strongest at language models were Stephen Merity and Alec Radford. And at the time I didn't know Alec, but I, after we had both, after I'd released ULMFiT and he had released GPT, I organized a chat for both of us with Cade Metz at the New York Times. And Cade Metz asked, sorry, and Alec answered this question for Cade.
And Cade was like, so how did, you know, GPT come about? And he said, well, I was pretty sure that pre-training on a general large corpus wouldn't work. So I hadn't tried it. And then I read ULMFiT and turns out it did work. And so I did it, you know, bigger and it worked even better. And similar with, with Stephen, you know, I asked Stephen Merity, like, why don't we just find, you know, take your AWD-LSTM and like train it on all of Wikipedia and fine tune it? And he's kind of like, well, I don't think that's going to really fly. Like two years before, I did a very popular talk at KDD, the conference, where everybody in NLP was in the audience. I recognized half the faces, you know, and I told them all this, I'm sure transfer learning is the key. I'm sure ImageNet, you know, is going to be an NLP thing as well. And, you know, everybody was interested and people asked me questions afterwards and, but not just, yeah, nobody followed up because everybody knew that it didn't work. I mean, even like, so we were scooped a little bit by Dai and Le, Quoc Le at Google. They had, they had, I didn't even realize this, which is a bit embarrassing, already done a large language model and fine-tuned it. But again, they didn't create a general purpose, large language model on a general purpose corpus. They only ever tested a domain specific corpus. And I haven't spoken to Quoc actually about that, but I assume that the reason was the same. It probably just didn't occur to them that the general approach could work. So maybe it was that kind of 30 years of mulling over the Searle Chinese room experiment that had convinced me that it probably would work. I don't know. Yeah. [00:17:48]Alessio: Interesting. I just dug up Alec's announcement tweet from 2018. He said: inspired by CoVe, ELMo, and ULMFiT, we show that a single transformer language model can be fine-tuned to a wide variety of tasks. It's interesting because, you know, today people think of OpenAI as the leader, kind of like the research lab pushing forward the field. What was it at the time? You know, like kind of like going back five years, people think of it as an overnight success, but obviously it took a while. [00:18:16]Swyx: Yeah. Yeah. [00:18:17]Jeremy: No, I mean, absolutely. And I'll say like, you know, it's interesting that it mentioned ELMo because in some ways that was kind of diametrically opposed to, to ULMFiT. You know, there was these kind of like, so there was a lot of, there was a lot of activity at the same time as ULMFiT's release. So there was, um, so before it, Bryan McCann, I think at Salesforce, had come out with this neat model that did a kind of multitask learning, but again, they didn't create a general fine-tuned language model first. There was ELMo, um, which I think was released, you know, actually quite a few months after the first ULMFiT example, I think. Um, but yeah, there was a bit of this stuff going on. And the problem was everybody was doing, and particularly after GPT came out, then everybody wanted to focus on zero shot and few shot learning. You know, everybody hated fine tuning. Everybody hated transfer learning. And like, I literally did tours trying to get people to start doing transfer learning and people, you know, nobody was interested, particularly after GPT showed such good results with zero shot and few shot learning.
And so I actually feel like we kind of went backwards for years and, to be honest, I mean, I'm a bit sad about this now, but I kind of got so disappointed and dissuaded by like, it felt like these much bigger labs, you know, like fast.ai had only ever been just me and Rachel, were getting all of this attention for an approach I thought was the wrong way to do it. You know, I was convinced was the wrong way to do it. And so, yeah, for years people were really focused on getting better at zero shot and few shot and it wasn't until, you know, this key idea of like, well, let's take the ULMFiT approach, but for step two, rather than fine tuning on a kind of a domain corpus, let's fine tune on an instruction corpus. And then in step three, rather than fine tuning on a reasonably specific task classification, let's fine tune on an RLHF task. And so that was really, that was really key, you know, so I was kind of like out of the NLP field for a few years there because yeah, it just felt like, I don't know, pushing uphill against this vast tide, which I was convinced was not the right direction, but who's going to listen to me, you know, cause I, as you said, I don't have a PhD, not at a university, or at least I wasn't then. I don't have a big set of computers to fine tune huge transformer models. So yeah, it was definitely difficult. It's always been hard. You know, it's always been hard. Like I've always been somebody who does not want to build stuff on lots of big computers because most people don't have lots of big computers and I hate creating stuff that most people can't use, you know, and also stuff that's created on lots of big computers has always been like much more media friendly. So like, it might seem like a recent thing, but actually throughout my 30 years in data science, the attention's always been on, you know, the big iron results. So when I first started, everybody was talking about data warehouses and it was all about Teradata and it'd be like, oh, this big bank has this huge room full of computers and they have like terabytes of data available, you know, at the press of a button. And yeah, that's always what people want to talk about, what people want to write about. And then of course, students coming out of their PhDs and stuff, that's where they want to go work because that's what they read about. And to me, it's a huge distraction, you know, because like I say, most people don't have unlimited compute and I want to help most people, not the small subset of the most well-off people. [00:22:16]Alessio: That's awesome. And it's great to hear, you do such a great job educating that a lot of times you're not telling your own story, you know? So I love this conversation. And the other thing before we jump into Fast.AI, actually, a lot of people that I know, they run across a new architecture and whatnot, they're like, I got to start a company and raise a bunch of money and do all of this stuff. And instead, you were like, I want everybody to have access to this. Why was that the case for you? Was it because you already had a successful venture in like FastMail and you were more interested in that? What was the reasoning? [00:22:52]Jeremy: It's a really good question. So I guess the answer is yes, that's the reason why. So when I was a teenager, I thought it would be really cool to like have my own company. You know, I didn't know the word startup. I didn't know the word entrepreneur. I didn't know the word VC.
And I didn't really know what any of those things were really until after we started Kaggle, to be honest. Even the ones I started were what we now call startups. I just thought they were just small businesses. You know, they were just companies. So yeah, so those two companies were FastMail and Optimal Decisions. FastMail was the first kind of synchronized email provider for non-businesses. So, something where you can get your same email at home, on your laptop, at work, on your phone, whatever. And then Optimal Decisions invented a new approach to insurance pricing. Something called profit-optimized insurance pricing. So I sold both of those companies, you know, after 10 years. And at that point, I had achieved the thing that as a teenager I had wanted to do. You know, it took a lot longer than it should have because I spent way longer in management consulting than I should have because I got caught up in that stupid rat race. But, you know, eventually I got there and I remember my mom saying to me, you must be so proud. You know, because she remembered my dream. She's like, you've done it. And I kind of reflected and I was like, I'm not proud at all. You know, like people quite liked FastMail. You know, it's quite nice to have synchronized email. It probably would have happened anyway. Yeah, I'm certainly not proud that I've helped some insurance companies suck more money out of their customers. Yeah, no, I'm not proud. You know, it's actually, I haven't really helped the world very much. You know, maybe in the insurance case I've made it a little bit worse. I don't know. So, yeah, I was determined to not waste more years of my life doing things, working hard to do things which I could not be reasonably sure would have a lot of value. So, you know, I took some time off. I wasn't sure if I'd ever work again, actually. I didn't particularly want to, because it felt like, yeah, it felt like such a disappointment. And, but, you know, and I didn't need to. I had enough money. Like, I wasn't super rich, but I had enough money. I didn't need to work. And I certainly recognized that amongst the other people I knew who had enough money that they didn't need to work, they all worked ridiculously hard, you know, and constantly put themselves in extremely stressful situations. And I thought, I don't want to be one of those idiots who's tied to, you know, buying a bigger plane than the next guy or whatever. You know, Kaggle came along and I mainly kind of did that just because it was fun and interesting to hang out with interesting people. But, you know, with Fast.ai in particular, you know, Rachel and I had a very explicit, you know, long series of conversations over a long period of time about like, well, how can we be the most helpful to society as a whole, and particularly to those people who maybe need more help, you know? And so we definitely saw the world going in a potentially pretty dystopian direction if the world's most powerful technology was controlled by a small group of elites. So we thought, yeah, we should focus on trying to help that not happen. You know, sadly, it looks like it still is likely to happen. But I mean, I feel like we've helped make it a little bit less likely. So we've done our bit. [00:26:39]Swyx: You've shown that it's possible. And I think your constant advocacy, your courses, your research that you publish, you know, just the other day you published a finding on, you know, learning that I think is still something that people are still talking about quite a lot.
I think that that is the origin story of a lot of people who are going to be, you know, little Jeremy Howards, furthering your mission with, you know, you don't have to do everything by yourself is what I'm saying. No, definitely. Definitely. [00:27:10]Jeremy: You know, that was a big takeaway from, like, Enlitic. Enlitic definitely felt like we had to do everything ourselves. And I kind of, I wanted to solve medicine. I'll say, yeah, okay, solving medicine is actually quite difficult. And I can't do it on my own. And there's a lot of other things I'd like to solve, and I can't do those either. So that was definitely the other piece, was like, yeah, you know, can we create an army of passionate domain experts who can change their little part of the world? And that's definitely happened. Like I find nowadays, at least half the time, probably quite a bit more, that I get in contact with somebody who's done really interesting work in some domain. Most of the time I'd say, they say, yeah, I got my start with fast.ai. So it's definitely, I can see that. And I also know from talking to folks at places like Amazon and Adobe and stuff, which, you know, there's lots of alumni there. And they say, oh my God, I got here. And like half of the people are fast.ai alumni. So it's fantastic. [00:28:13]Swyx: Yeah. [00:28:14]Jeremy: Actually, Andrej Karpathy grabbed me when I saw him at NeurIPS a few years ago. And he was like, I have to tell you, thanks for the fast.ai courses. When people come to Tesla and they need to know more about deep learning, we always send them to your course. And the OpenAI Scholars Program was doing the same thing. So it's kind of like, yeah, it's had a surprising impact, you know, and that's just one of like three things we do, the course, you know. [00:28:40]Swyx: Yes. [00:28:40]Jeremy: And it's only ever been at most two people, either me and Rachel or me and Sylvain; nowadays, it's just me. So yeah, I think it shows you don't necessarily need a huge amount of money and a huge team of people to make an impact. [00:28:56]Swyx: Yeah. So just to reintroduce fast.ai for people who may not have dived into it much, there is the courses that you do. There is the library that is very well loved. And I kind of think of it as a nicer layer on top of PyTorch that people should start with by default and use it as the basis for a lot of your courses. And then you have like nbdev, which I don't know, is that the third one? [00:29:27]Jeremy: Oh, so the three areas were research, software, and courses. [00:29:32]Swyx: Oh, sorry. [00:29:32]Jeremy: So then in software, you know, fastai is the main thing, but nbdev is not far behind. But then there's also things like fastcore, ghapi, I mean, dozens of open source projects that I've created and some of them have been pretty popular and some of them are still a little bit hidden, actually. Some of them I should try to do a better job of telling people about. [00:30:01]Swyx: What are you thinking about? Yeah, what's one that comes to mind? Oh, I don't know, just like little things. [00:30:04]Jeremy: Like, for example, for working with EC2 and AWS, I created a fastec2 library, which I think is like way more convenient and nice to use than anything else out there. And it's literally got a whole autocomplete, dynamic autocomplete that works both on the command line and in notebooks that'll like auto-complete your instance names and everything like that. You know, just little things like that.
I try to make like, when I work with some domain, I try to make it like, I want to make it as enjoyable as possible for me to do that. So I always try to kind of like, like with ghapi, for example, I think that the GitHub API is incredibly powerful, but I didn't find it good to work with because I didn't particularly like the libraries that are out there. So like ghapi, like fastec2, it like autocompletes both at the command line or in a notebook or whatever, like literally the entire GitHub API. The entire thing is like, I think it's like less than 100K of code because it actually, as far as I know, is the only one that grabs it directly from the official OpenAPI spec that GitHub produces. And like if you're in ghapi and you just type an API method, you know, autocomplete, and hit enter, it prints out the brief docs and then gives you a link to the actual documentation page. You know, GitHub Actions, I can write now in Python, which is just so much easier than writing them in TypeScript and stuff. So, you know, just little things like that. [00:31:40]Swyx: I wish more developers took that approach of publishing some of their work along the way. You described the third arm of Fast.ai as research. It's not something I see often. Obviously, you do do some research. And how do you run your research? What are your research interests? [00:31:59]Jeremy: Yeah, so research is what I spend the vast majority of my time on. And the artifacts that come out of that are largely software and courses. You know, so to me, the main artifact shouldn't be papers because papers are things read by a small exclusive group of people. You know, to me, the main artifacts should be like something teaching people, here's how to use this insight and here's software you can use that builds it in. So I think I've only ever done three first-author papers in my life, you know, and none of those are ones I wanted to do. You know, they were all ones that, like, so one was ULMFiT, where Sebastian Ruder reached out to me after seeing the course and said, like, you have to publish this as a paper, you know. And he said, I'll write it. He said, I want to write it because if I do, I can put it on my PhD and that would be great. And it's like, okay, well, I want to help you with your PhD. And that sounds great. So like, you know, one was the masks paper, which just had to exist and nobody else was writing it. And then the third was the fastai library paper, which again, somebody reached out and said, please, please write this. We will waive the fee for the journal and everything and actually help you get it through publishing and stuff. So yeah, so I don't, other than that, I've never written a first author paper. So the research is like, well, so for example, you know, DAWNBench was a competition, which Stanford ran a few years ago. It was kind of the first big competition of like, who can train neural nets the fastest rather than the most accurate. And specifically it was who can train ImageNet the fastest. And again, this was like one of these things where it was created by necessity. So Google had just released their TPUs. And so I heard from my friends at Google that they had put together this big team to smash DAWNBench so that they could prove to people that they had to use Google Cloud and use their TPUs and show how good their TPUs were. And we kind of thought, oh s**t, this would be a disaster if they do that, because then everybody's going to be like, oh, deep learning is not accessible.
[00:34:20]Swyx: You know, to actually be good at it, [00:34:21]Jeremy: you have to be Google and you have to use special silicon. And so, you know, we only found out about this 10 days before the competition finished. But, you know, we basically got together an emergency bunch of our students and Rachel and I and sat for the next 10 days and just tried to crunch through and try to use all of our best ideas that had come from our research. And so particularly progressive resizing, just basically train mainly on small things, train on non-square things, you know, stuff like that. And so, yeah, we ended up winning, thank God. And so, you know, we turned it around from being like, like, oh s**t, you know, this is going to show that you have to be Google and have TPUs, to being like, oh my God, even the little guy can do deep learning. So that's an example of the kind of like research artifacts we do. And yeah, so all of my research is always, how do we do more with less, you know? So how do we get better results with less data, with less compute, with less complexity, with less education, you know, stuff like that. So ULMFiT's obviously a good example of that. [00:35:37]Swyx: And most recently you published, can LLMs learn from a single example? Maybe could you tell the story a little bit behind that? And maybe that goes a little bit too far into the learning of very low resource, the literature. [00:35:52]Jeremy: Yeah, yeah. So me and my friend, Jono Whitaker, basically had been playing around with this fun Kaggle competition, which is actually still running as we speak, which is, can you create a model which can answer multiple choice questions about anything that's in Wikipedia? And the thing that makes it interesting is that your model has to run on Kaggle within nine hours. And Kaggle's very, very limited. So you've only got 14 gig RAM, only two CPUs, and a small, very old GPU. So this is cool, you know, if you can do well at this, then this is a good example of like, oh, you can do more with less. So yeah, Jono and I were playing around with fine tuning, of course, transfer learning, pre-trained language models. And we saw this, like, so we always, you know, plot our losses as we go. So here's another thing we created. Actually, Sylvain Gugger, when he worked with us, created something called fastprogress, which is kind of like tqdm, but we think a lot better. So we look at our fastprogress curves, and they kind of go down, down, down, down, down, down, down, a little bit, little bit, little bit. And then suddenly go clunk, and they drop. And then down, down, down, down, down a little bit, and then suddenly clunk, they drop. We're like, what the hell? These clunks are occurring at the end of each epoch. So normally in deep learning, this would be, this is, you know, I've seen this before. It's always been a bug. It's always turned out that like, oh, we accidentally forgot to turn on eval mode during the validation set, so it was actually learning then; or, oh, we accidentally were calculating moving average statistics throughout the epoch, so, you know, so it's a recent moving average or whatever. And so we were using Hugging Face Trainer. So, you know, I did not give my friends at Hugging Face the benefit of the doubt. I thought, oh, they've f**ked up Hugging Face Trainer, you know, idiots. Well, we'll use the fast.ai Learner instead. So we switched over to Learner.
We still saw the clunks and, you know, that's, yeah, it shouldn't really happen because semantically speaking, an epoch isn't like, it's not a thing, you know, like nothing happens. Well, nothing's meant to happen when you go from ending one epoch to starting the next one. So there shouldn't be a clunk, you know. So I kind of asked around on the open source discords. That's like, what's going on here? And everybody was just like, oh, that's just what, that's just what these training curves look like. Those all look like that. Don't worry about it. And I was like, oh, are you all using Trainer? Yes. Oh, well, there must be some bug with Trainer. And I was like, well, we also saw it in Learner [00:38:42]Swyx: and somebody else is like, [00:38:42]Jeremy: no, we've got our own Trainer. We get it as well. They're just like, don't worry about it. It's just something we see. It's just normal. [00:38:48]Swyx: I can't do that. [00:38:49]Jeremy: I can't just be like, here's something that's like in the previous 30 years of neural networks, nobody ever saw it. And now suddenly we see it. [00:38:57]Swyx: So don't worry about it. [00:38:59]Jeremy: I just, I have to know why. [00:39:01]Swyx: Can I clarify? This is, was everyone that you're talking to, were they all seeing it for the same dataset or in different datasets? [00:39:08]Jeremy: Different datasets, different Trainers. They're just like, no, this is just, this is just what it looks like when you fine tune language models. Don't worry about it. You know, I hadn't seen it before, but I'd been kind of like, as I say, I, you know, I kept working on them for a couple of years after ULMFiT. And then I kind of moved on to other things, partly out of frustration. So I hadn't been fine tuning, you know, I mean, Llama's only been out for a few months, right? But I wasn't one of those people who jumped straight into it, you know? So I was relatively new to the kind of Llama fine-tuning world, whereas these guys had been, you know, doing it since day one. [00:39:49]Swyx: It was only a few months ago, [00:39:51]Jeremy: but it's still quite a bit of time. So, so yeah, they're just like, no, this is all what we see. [00:39:56]Swyx: Don't worry about it. [00:39:56]Jeremy: So yeah, I, I've got a very kind of like, I don't know, I've just got this brain where I have to know why things are. And so I kind of, I ask people like, well, why, why do you think it's happening? And they'd be like, oh, pretty obviously, cause it's like memorized the data set. It's just like, that can't be right. It's only seen it once. Like, look at this, the loss has dropped by 0.3, 0.3, which is like, basically it knows the answer. And like, no, no, it's just, it is, it's just memorized the data set. So yeah. So look, Jono and I did not discover this and Jono and I did not come up with a hypothesis. You know, I guess we were just the ones, I guess, who had been around for long enough to recognize that like, this, this isn't how it's meant to work. And so we, we, you know, and so we went back and like, okay, let's just run some experiments, you know, cause nobody seems to have actually published anything about this. [00:40:51]Jeremy: Well, not quite true. Some people had published things, but nobody ever actually stepped back and said like, what the hell, you know, how can this be possible? Is it possible? Is this what's happening? And so, yeah, we created a bunch of experiments where we basically predicted ahead of time.
It's like, okay, if this hypothesis is correct, that it's memorizing the training set, then we ought to see blah under conditions blah, but not under these conditions. And so we ran a bunch of experiments and all of them supported the hypothesis that it was memorizing the data set after a single pass. And it's a pretty big data set, you know, which in hindsight, it's not totally surprising because, remember, the ULMFiT theory was like, well, it's kind of creating all these latent capabilities to make it easier for it to predict the next token. So if it's got all this kind of latent capability, it ought to also be really good at compressing new tokens because it can immediately recognize it as like, oh, that's just a version of this. So it's not so crazy, you know, but it is, it requires us to rethink everything because like, and nobody knows like, okay, so how do we fine tune these things? Because like, it doesn't even matter. Like maybe it's fine. Like maybe it's fine that it's memorized the data set after one go and you do a second go and okay, the validation loss is terrible because it's now really overconfident. [00:42:20]Swyx: That's fine. [00:42:22]Jeremy: Don't, you know, don't, I keep telling people, don't track validation loss, track validation accuracy because at least that will still be useful. Just another thing that's got lost since ULMFiT, nobody tracks accuracy of language models anymore. But you know, it'll still keep learning and it does, it does keep improving. But is it worse? You know, like, is it like, now that it's kind of memorized it, it's probably getting a less strong signal, you know, I don't know. So I still don't know how to fine tune language models properly and I haven't found anybody who feels like they do, like nobody really knows whether this memorization thing is, it's probably a feature in some ways. It's probably some things that you can do usefully with it. It's probably, yeah, I have a feeling it's messing up training dynamics as well. [00:43:13]Swyx: And does it come at the cost of catastrophic forgetting as well, right? Like, which is the other side of the coin. [00:43:18]Jeremy: It does to some extent, like we know it does, like look at Code Llama, for example. So Code Llama was a, I think it was like a 500 billion token fine tuning of Llama 2 using code, and also prose about code, that Meta did. And honestly, they kind of blew it because Code Llama is good at coding, but it's bad at everything else, you know, and it used to be good. Yeah, I was pretty sure it was like, before they released it, me and lots of people in the open source discords were like, oh my God, you know, we know this is coming, Yann LeCun is saying it's coming. I hope they kept at least like 50% non-code data because otherwise it's going to forget everything else. And they didn't, only like 0.3% of their epochs were non-code data. So it did, it forgot everything else. So now it's good at code and it's bad at everything else. So we definitely have catastrophic forgetting. It's fixable, just somebody has to do, you know, somebody has to spend their time training a model on a good mix of data. Like, so, okay, so here's the thing. Even though I originally created the three-step approach that everybody now does, my view is it's actually wrong and we shouldn't use it. [00:44:36]Jeremy: And that's because people are using it in a way different to why I created it. You know, I created it thinking the task-specific models would be more specific.
You know, it's like, oh, this is like a sentiment classifier as an example of a task, you know, but the tasks now are like a, you know, RLHF, which is basically like answer questions that make people feel happy about your answer. So that's a much more general task and it's a really cool approach. And so we see, for example, RLHF also breaks models, like, you know, like GPT-4 after RLHF: we know from kind of the work that Microsoft did, you know, that the pre-RLHF, earlier, less aligned version was better. And these are all kind of examples of catastrophic forgetting. And so to me, the right way to fine-tune language models is to actually throw away the idea of fine-tuning. There's no such thing. There's only continued pre-training. And pre-training is something where from the very start, you try to include all the kinds of data that you care about, all the kinds of problems that you care about, instructions, exercises, code, general purpose document completion, whatever. And then as you train, you gradually curate that, you know, you gradually make that higher and higher quality and more and more specific to the kinds of tasks you want it to do. But you never throw away any data. You always keep all of the data types there in reasonably high quantities. You know, maybe the quality filter, you stop training on low quality data, because that's probably fine to forget how to write badly, maybe. So yeah, that's now my view, is I think ULMFiT is the wrong approach. And that's why we're seeing a lot of these, you know, so-called alignment taxes and this view of like, oh, a model can't both code and do other things. And, you know, I think it's actually because people are training them wrong. [00:46:47]Swyx: Yeah, well, I think you have a clear [00:46:51]Alessio: anti-laziness approach. I think other people are not as good hearted, you know, they're like, [00:46:57]Swyx: hey, they told me this thing works. [00:46:59]Alessio: And if I release a model this way, people will appreciate it, I'll get promoted and I'll kind of make more money. [00:47:06]Jeremy: Yeah, and it's not just money. It's like, this is also how citations work, sadly, you know. So if you want to get cited, you need to write a paper that people in your field recognize as an advancement on things that we know are good. And so we've seen this happen again and again. So like I say, like zero shot and few shot learning, everybody was writing about that. Or, you know, with image generation, everybody just was writing about GANs, you know, and I was trying to say like, no, GANs are not the right approach. You know, and I showed again through research that we demonstrated in our videos that you can do better than GANs, much faster and with much less data. And nobody cared because again, like if you want to get published, you write a GAN paper that slightly improves this part of GANs and this tiny field, you'll get published, you know. So it's, yeah, it's not set up for real innovation. It's, you know, again, it's really helpful for me, you know, I have my own research lab with nobody telling me what to do and I don't even publish. So it doesn't matter if I get citations. And so I just write what I think actually matters. I wish there was, and, you know, and actually places like OpenAI, you know, the researchers there can do that as well. It's a shame, you know, I wish there were more academic, open venues in which people can focus on like genuine innovation.
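To make the continued pre-training idea concrete, here is a toy sketch of a single evolving data mixture. The source names, weights, and schedule are our own illustrative assumptions, not a recipe from Jeremy; the point is just that every data type stays in the mix for the whole run, with sampling weights gradually shifting toward curated, task-like data instead of switching to a separate fine-tuning stage.

```python
import random

# A toy sketch of "continued pre-training": one training loop, one evolving
# data mixture. Source names and contents are illustrative placeholders.
sources = {
    "web_text":     ["<web doc>"] * 1000,       # general document completion
    "code":         ["<code file>"] * 1000,
    "instructions": ["<instruction pair>"] * 1000,
    "exercises":    ["<exercise>"] * 1000,
}

def mixture_weights(progress):
    # Shift weight toward curated, task-like data as training progresses
    # (progress runs from 0.0 to 1.0), but never drop any source to zero.
    return {
        "web_text":     1.0 - 0.6 * progress,
        "code":         1.0,
        "instructions": 0.4 + 0.6 * progress,
        "exercises":    0.4 + 0.6 * progress,
    }

total_steps = 10_000
for step in range(total_steps):
    w = mixture_weights(step / total_steps)
    names = list(sources)
    source = random.choices(names, weights=[w[n] for n in names], k=1)[0]
    example = random.choice(sources[source])
    # train_step(model, example)  # placeholder for one gradient update

# Compare with the three-step sketch earlier: instead of swapping datasets
# between stages, the mixture itself is the curriculum.
```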
[00:48:38]Swyx: Twitter, which unironically has become a little bit of that forum. I wanted to follow up on one thing that you mentioned, which is that you checked around the open source discords. I don't know if it's too pushy to ask, like, what Discords are lively or useful right now. I think that something I definitely felt like I missed out on was the early days of Eleuther AI, which is a very hard thing to recreate. And, you know, like what is the new Eleuther? And you actually shouted out the Alignment Lab AI Discord in your blog post. And that was the first time I even knew, like I saw them on Twitter, never knew they had a Discord, never knew that there was actually substantive discussions going on in there and that you were an active member of it. Okay, yeah. [00:49:23]Jeremy: And then even then, if you do know about that and you go there, it'll look like it's totally dead. And that's because unfortunately, nearly all the Discords, nearly all of the conversation happens in private channels. You know, and that's, I guess. [00:49:35]Swyx: How does someone get into that world? Because it's obviously very, very instructive, right? [00:49:42]Jeremy: You could just come to the fast.ai Discord, which I'll be honest with you, it's less bustling than some of the others, but it's not terrible. And so like, at least, to be fair, one of our more bustling channels is private. [00:49:57]Swyx: I guess. [00:49:59]Jeremy: So I'm just thinking. [00:50:01]Swyx: It's just the nature of quality discussion, right? Yeah, I guess when I think about it, [00:50:05]Jeremy: I didn't have any private discussions on our Discord for years, but there was a lot of people who came in with like, oh, I just had this amazing idea for AGI. If you just thought about like, if you imagine that AI is a brain, then we, you know, this just, I don't want to talk about it. You know, I don't want to like, you don't want to be dismissive or whatever. And it's like, oh, well, that's an interesting comment, but maybe you should like, try training some models first to see if that aligns with your intuition. Like, oh, but how could I possibly learn? It's like, well, we have a course, just actually spend time learning. Like, you know, anyway. And there's like, okay, I know the people who always have good answers there. And so I created a private channel and put them all in it. And I got to admit, that's where I post more often because there's much less, you know, flights of fancy about how we could solve AGI, blah, blah, blah. So there is a bit of that. But having said that, like, I think the bar is pretty low. Like if you join a Discord and you can hit the like participants or community or whatever button, you can see who's in it. And then you'll see at the top who the admins or moderators or people in the dev role are. And just DM one of them and say like, oh, here's my GitHub. Well, here's some blog posts I wrote. You know, I'm interested in talking about this, you know, can I join the private channels? And I've never heard of anybody saying no. I will say, you know, Eleuther's all pretty open. So you can do the Eleuther Discord still. You know, one problem with the Eleuther Discord is it's been going on for so long that it's like, it's very inside baseball. It's quite hard to get started. Yeah. CarperAI, I think it's all open. That's under Stability. That's more accessible. [00:52:03]Swyx: Yeah.
[00:52:04]Jeremy: There's also, just recently, Nous Research, that does, like, the Hermes models and datasets, just opened up. They've got some private channels, but it's pretty open, I think. You mentioned Alignment Lab; that one, all the interesting stuff is on private channels. So just ask. If you know me, ask me, 'cause I've got admin on that one. There's also, yeah, OS Skunkworks, OS Skunkworks AI is a good Discord, which I think is open. So yeah, they're all pretty good. [00:52:40]Swyx: I don't want you to leak any Discords that don't want any publicity, but this is all helpful. [00:52:46]Jeremy: We all want people, like, we all want people [00:52:49]Swyx: who, like, [00:52:51]Jeremy: want to build stuff, rather than people who... And, like, it's fine to not know anything as well. But if you don't know anything and you want to tell everybody else what to do and how to do it, that's annoying. If you don't know anything and want to be told, like, here's a really small kind of task that, as somebody who doesn't know anything, is going to take you a really long time to do, but it would still be helpful, and then you go and do it, that would be great. The truth is, yeah, [00:53:19]Swyx: like, I don't know, [00:53:20]Jeremy: maybe 5% of people who come in with great enthusiasm, saying that they want to learn and they'll do anything, [00:53:25]Swyx: and then somebody says, like, [00:53:25]Jeremy: okay, here's some work you can do... almost nobody does that work. So if you're somebody who actually does the work and follows up, you will massively stand out. That's an extreme rarity. And everybody will then want to help you do more work. [00:53:41]Swyx: So yeah. [00:53:41]Jeremy: So just, yeah, just do the work and people will want to support you. [00:53:47]Alessio: Our Discord used to be referral-only for a long time. We didn't have a public invite, and then we opened it, and there's kind of like channel gating. Yeah. A lot of people just want to do, I remember, it used to be like, you know, a forum moderator. [00:54:00]Swyx: It's like people just want to do [00:54:01]Alessio: like drive-by posting, [00:54:03]Swyx: you know, and like, [00:54:03]Alessio: they don't want to help the community. They just want to get their question answered. [00:54:07]Jeremy: I mean, the funny thing is, our forum community does not have any of that garbage. You know, there's something specific about the low-latency thing where people expect an instant answer. Whereas somehow in a forum thread, where they know it's there forever, people are a bit more thoughtful. But then the forums are less active than they used to be, because Discord has got more popular, you know? So it's all a bit of a compromise. Running a healthy community is always a bit of a challenge. All right, we've got so many more things [00:54:47]Alessio: we want to dive into, but I don't want to keep you here for hours. [00:54:50]Swyx: This is not the Lex Fridman podcast, [00:54:52]Alessio: as we always like to say. One topic I would love to chat a bit about is Mojo, Modular, you know, Chris Lattner, who's been on the podcast. So we want to spend a little time there. You recently did A Hacker's Guide to Language Models, and you ran through everything from quantized models to smaller models, larger models, and all of that. But obviously Modular is taking its own approach. Yeah, what got you excited?
I know you and Chris have been talking about this for, like, years, and a lot of the ideas were ones you had, so. [00:55:23]Jeremy: Yeah, yeah, yeah, yeah, no, absolutely. So I met Chris, I think it was at the first TensorFlow Dev Summit. And I don't think he had even, I'm not sure if he'd even officially started his employment with Google at that point. So, you know, certainly nothing had been mentioned. So I admired him from afar, with LLVM and Swift and whatever. And so I saw him walk into the courtyard at Google, and it's just like, oh s**t, man, that's Chris Lattner. I wonder if he would lower his standards enough to talk to me. Well, worth a try. So I plucked up my courage, because, like, nobody was talking to him and he looked a bit lost, and I wandered over and it's like, oh, you're Chris Lattner, right? And it's like, what are you doing here? And I was like, yeah, yeah, yeah, I'm Jeremy Howard. It's like, oh, do you do some of this AI stuff? And I was like, yeah, yeah, I like this AI stuff. Are you doing AI stuff? It's like, well, I'm thinking about starting to do some AI stuff. Yeah, I think it's going to be cool. And it's like, wow. So, like, I spent the next half hour just basically brain-dumping all the ways in which AI was stupid to him, and he listened patiently. And I thought he probably wouldn't even remember or care or whatever. But yeah, then I kind of re-caught up with him a few months later, and it's like, I've been thinking about everything you said in that conversation. And he, like, narrated back his response to every part of it, projects he was planning to do. And it's just like, oh, this dude follows up. Holy s**t. And I was like, wow, okay. And he was like, yeah, so we're going to create this new thing called Swift for TensorFlow. And it's going to be, like, a compiler with auto differentiation built in, and blah, blah, blah. And I was like, why would that help? [00:57:10]Swyx: You know, why would you? [00:57:10]Jeremy: And he was like, okay, with a compiler, during the forward pass you don't have to worry about saving context, you know, because a lot of it can be optimized away in the backward pass. And I was like, oh my God. Because I didn't really know much about compilers; I'd spent enough time to kind of understand the ideas, but it hadn't occurred to me that a compiler basically solves a lot of the problems we have as end users. I was like, wow, that's amazing. Okay, but you do know, right, that nobody's going to use this unless it's, like, usable? It's like, yeah, I know, right. So I was thinking you should create, like, a fast.ai for this. And I said, okay, but I don't even know Swift. And he was like, well, why don't you start learning it? And if you have any questions, ask me. It's just like, holy s**t. Like, not only has Chris Lattner lowered his standards enough to talk to me, but he's offering me personal tutoring on the programming language that he made. So I was just like, I'm not g
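The remark about not having to "worry about saving context" in the forward pass refers to the memory-versus-compute trade-off in automatic differentiation: a naive forward pass stores every intermediate activation for the backward pass, whereas activations can instead be discarded and recomputed. The sketch below opts into that trade-off by hand using PyTorch's torch.utils.checkpoint, purely as a familiar analogy; Swift for TensorFlow, the system actually being discussed, aimed to let a compiler make such decisions automatically.

```python
# A hand-rolled version of the memory/compute trade-off discussed above,
# using PyTorch activation checkpointing. This is an analogy only: the
# transcript is about a compiler doing this analysis automatically.
import torch
from torch.utils.checkpoint import checkpoint

block = torch.nn.Sequential(
    torch.nn.Linear(512, 512), torch.nn.ReLU(),
    torch.nn.Linear(512, 512), torch.nn.ReLU(),
)
x = torch.randn(32, 512, requires_grad=True)

# Plain forward pass: every intermediate activation inside `block`
# is saved ("context" is kept) so the backward pass can use it.
block(x).sum().backward()

# Checkpointed forward pass: activations inside `block` are discarded
# and recomputed during backward, trading extra compute for less memory.
x.grad = None
checkpoint(block, x, use_reentrant=False).sum().backward()
```

The checkpointed call uses less peak memory at the cost of one extra forward computation through the block during backward; Chris's point, as Jeremy relays it, was that a compiler can make this kind of decision globally without the user annotating anything.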
Co-founder and CEO Colin Schilling, CCO Eric Phillips and EVP of marketing Rachel Thomas join the Brewbound team to discuss the cidery's participation in National Cider Month, as well as its debut at the Great American Beer Festival and a retail activation with Whole Foods. Plus, Zoe, Justin and Jess break down recent headlines – including Molson Coors' Happy Thursday hard refresher, the Beer Institute's annual meeting and Duvel USA's 2024 innovation slate – and sample hop-infused chocolate.
SELECTED LINKS FROM THE EPISODE
Teach Your Kids: Website | LinkedIn | X | Instagram | Substack | Facebook
Manisha: LinkedIn | X | Instagram | Facebook
Lisa Betts-LaCroix: Website | LinkedIn | Super Power U Podcast | Facebook
Nir Eyal: Website | LinkedIn | X | Facebook | Nir and Far Podcast
Kerry McDonald: LiberatED Podcast | Website | X | Instagram | Facebook
Shiren Rattigan: Colossal Academy | LinkedIn | Instagram | X
Naval Ravikant: Airchat | LinkedIn | X | Podcast | YouTube
Rachel Thomas: Fast.ai | LinkedIn | X | Facebook
Alycia Wright: Cultural Roots Homeschool Co-op | LinkedIn | Instagram | Facebook
Join our premium community with expert support and advice
Join the Conversation on Airchat
Homeschooling Room: https://getairchat.com/manisharose/homeschooling
Related Teach Your Kids Podcast Episodes
Homeschooling with Naval Ravikant, Nir Eyal, Rachel Thomas, Kerry McDonald, Alycia Wright, Shiren Rattigan, and Lisa Betts-LaCroix: Part 1
But what about socialization?
Raising Indistractable Kids: Nir Eyal's Unconventional Approach to Homeschooling
Microschooling with Iman Alleyne & Shiren Rattigan
The Future of Educational Choice: Kerry McDonald Unpacks it All
Teach Your Kids: Game-Based Learning: The Prodigy Approach with Rohan Mahimker
Blog Posts
So, what's the big deal about "Mastery Learning"?
Hugo speaks with Chris Wiggins (Columbia, NYTimes) and Matthew Jones (Princeton) about their recent book How Data Happened, and the Columbia course it expands upon, data: past, present, and future. Chris is an associate professor of applied mathematics at Columbia University and the New York Times' chief data scientist, and Matthew is a professor of history at Princeton University and a former Guggenheim Fellow. From facial recognition to automated decision systems that inform who gets loans and who receives bail, we all now move through a world determined by data-empowered algorithms. These technologies didn't just appear: they are part of a history that goes back centuries, from the census enshrined in the US Constitution to the birth of eugenics in Victorian Britain to the development of Google search. DJ Patil, former U.S. Chief Data Scientist, said of the book: "This is the first comprehensive look at the history of data and how power has played a critical role in shaping the history. It's a must read for any data scientist about how we got here and what we need to do to ensure that data works for everyone." If you're a data scientist, machine learning engineer, or work with data in any way, it's increasingly important to know more about the history and future of the work that you do, and to understand how your work impacts society and the world. Among other things, they'll delve into:
* the history of human use of data;
* how data are used to reveal insight and support decisions;
* how data and data-powered algorithms shape, constrain, and manipulate our commercial, civic, and personal transactions and experiences; and
* how exploration and analysis of data have become part of our logic and rhetoric of communication and persuasion.
You can also sign up for our next livestreamed podcast recording here (https://www.eventbrite.com/e/data-science-past-present-and-future-tickets-695643357007?aff=kjvg)!
LINKS
How Data Happened, the book (https://wwnorton.com/books/how-data-happened)
data: past, present, and future, the course (https://data-ppf.github.io/)
Race After Technology, by Ruha Benjamin (https://www.ruhabenjamin.com/race-after-technology)
The problem with metrics is a big problem for AI, by Rachel Thomas (https://www.fast.ai/)
Vanishing Gradients on YouTube (https://www.youtube.com/channel/UC_NafIo-Ku2loOLrzm45ABA)
In this episode, we talk with Rachel Thomas, a personal survivor of human trafficking and an example of how anyone can become a victim. Rachel tells her story of how a successful "girl next door" can be manipulated into the abusive, tragic, and emotionally devastating world of human trafficking. Rachel provides five major takeaways for parents, adolescents, and young adults to help us become aware of the dangers. From social media precautions to identifying warning signs and how to effectively communicate the issues, Rachel gives an inside look into how to prevent others from becoming victims, as well as hope that there are solutions. This is a must watch/listen, as anyone can become a victim and the epidemic of human trafficking is very, very real. Get Ready To Be Inspired, Educated, Empowered and Entertained! For more information visit us @shesa10times5. https://instagram.com/shesa10times5
‘Dear Earth' is the show at the Hayward Gallery on London's South Bank that brings together 15 global artists responding to the crisis our planet is facing. We talk to Rachel Thomas, the chief curator, and two of the artists exhibiting there, Ackroyd & Harvey. Ackroyd & Harvey have contributed a series of portraits of environmental activists made from seedling grass. Rachel tells us about the other exhibits, including the moving and enchanting film ‘The Future: Sixes and Sevens' by Cornelia Parker, depicting small children talking about their fears and hopes. Other works include photographs and film of the devastated Kichwa Territory in Peru by Richard Mosse; John Gerrard's ‘Surrender', a digital installation of a flag which heralds visitors into the show; Jenny Kendler's large-scale sculpture of birds' eyes, many of them belonging to birds in danger of extinction or already extinct; and the five-metre-high ‘Living Pyramid' at the show's heart by 93-year-old Agnes Denes. We also hear about the Hayward's beautiful roof garden created by Grounded Ecotherapy, set up to help recovering addicts, alcoholics and people with mental health problems. The garden was commissioned 11 years ago and now contains 250 species of wild indigenous plant, more than any other roof terrace in the world. It's a devastating but beautiful exhibition, conceived to convey hope, start conversations and explore solutions through the artists' lens.
Rachel Thomas is a local jeweler who owns RLT Jewelry. Not only does she create custom jewelry, but she also specializes in amazing keepsake pieces made from things like the ashes of a loved one, flowers from a wedding or funeral, breast milk, and SO much more! Follow + Support on your fav platform: itbpodcast.com Check out our guest at: rltjewelry.com
Ellevate Podcast: Conversations With Women Changing the Face of Business
This week, we sit down with Rachel Thomas, Co-founder and CEO of LeanIn.Org and OptionB.Org, and Alexis Krivkovich, Senior Partner at McKinsey & Company, to discuss the 2022 edition of McKinsey and LeanIn.Org's Women in the Workplace report, including why women leave the workforce, how to retain better representation and leadership, and the ups and downs of hybrid work and flexibility.
Traumas are the things that hold us accountable for who we choose to be. We can allow them to take over our lives, or we can deal with them head on and use them as life lessons. Rachel is the CEO and founder of Exercise Your Soul, a transformation coach, a speaker, and a mother. Rachel became a relationship expert by helping her clients face their true emotions, confronting them head on and accepting them. Rachel has a wide variety of experience in her field, but what she addresses with Jeff is the biggest question we all ask ourselves: why is it easier to help other people through a situation than it is to help ourselves? The pressure of trauma. Rachel talks about what made her realize why her first marriage didn't work: it wasn't her husband, it was her own lack of clarity about her emotions. The biggest misconception is that we tell ourselves we are fine, we are happy, nothing is wrong. It's about digging up what's really bothering you. You can't love others without loving yourself.
What You'll Learn:
Why we are wired for thriving
The unique joy Rachel experienced as an educator
Why it is good to recognize the skill traits you have
The importance of facing your traumas
What makes us afraid of discomfort
The bitter truth of loving someone who doesn't love themselves
What makes Rachel passionate about helping other people find themselves in their emotions
Favorite Quote: "It's that energy of, like, yes, I am embracing life to the fullest, and it doesn't matter what the context is, it doesn't matter what the content is. I am showing up as my authentic self with what is true for myself in this moment. Waiting, waiting, like, I'm doing it. I am doing this life thing." - Rachel Sartori-Thomas
Connect With Rachel: Instagram | Website | Facebook
How To Get Involved: Addicted to Winning connects listeners through stories that prove why mindset matters. Jeff Brekken was working a 60-hour week on his family's farm by the time he was 10. When he wasn't helping with the harvest, he was at the hockey rink. Jeff's early life experiences taught him he needs to be either all-in or all-out. In 2000 he started a single-family home construction business, and by 2008 he was looking for the next big thing. Jeff founded Blue Sky Benefit Solutions & Rise Above HR/Recruiting on top of a lifelong passion for helping people. Through this show, Jeff will take that passion one step further. If you enjoyed this episode, head over and visit us on Apple Podcasts - leave a review and let us know what you thought! Your feedback keeps us going. Thanks for helping.
A leader in the anti-human trafficking movement, Carissa Phelps is a highly respected and impressive advocate for protecting young people from pimps and traffickers. She has an MBA and a JD from UCLA, was the subject of a documentary, Carissa, wrote a book, Runaway Girl: Escaping Life on the Streets One Helping Hand at a Time, and runs a company that helps train survivors to educate and help victims. Carissa invited me to a law enforcement training, and the first curriculum for survivors, Ending The Game, championed by Rachel Thomas, was developed. If you care about helping to expose high-level traffickers, you must listen to this interview and understand the problem and what is being done.
Technology Integration Specialist Steven Lamb shares how to enhance learning in all content areas through technological innovation and imagination. Steven has been recognized as a PBS Digital Innovator, an Apple Distinguished Educator, a Henry Ford Innovative Teacher, and an ISTE Making IT Happen award recipient. He and his wife, Rachel Thomas, started a virtual team teaching movement, which they shared at the 2016 and 2017 TEDxABQ Education events. Listen and be inspired!
Resources:
Connect to Infini-D Learning to inquire about a pilot
Steven Lamb and Rachel Thomas' website on Virtual Team Teaching
Reach out to Steven at steven.lamb@isdenver.org
International School of Denver
Steven Lamb and Rachel Thomas at TEDxABQ Education
Tristan Harris and the Center For Humane Technology
Foundations of Humane Technology: An Online Course For Professionals Shaping Tomorrow's Technology
Find The Social Dilemma on Netflix
EdCuration's Certified EdTrustees Micro Professional Learning ExPLorations
EdCuration's Blog: Learning in Action
EdCuration's upcoming Online Events
Have you ever wondered what goes on behind the scenes of Plus? Find out in this special guest episode! We are very pleased to be collaborating with the wonderful Isaac Newton Institute for Mathematical Sciences (INI) in Cambridge. Recently Plus editors Marianne Freiberger and Rachel Thomas appeared on the INI's Living Proof podcast, talking to the INI's communications manager Dan Aspel. We talked to Dan about mathematical journalism, spreading a love of numbers, and our new collaboration with the INI. Topics touched upon include our late boss, the wonderful John Barrow, the many joys of being a maths communicator, and the thrill that comes from finding and inspiring audiences with the most unusual of subjects. Thank you to Dan and the INI for allowing us to host this episode of Living Proof on our podcast. You can find all the content from our collaboration with the INI here.
00:00 – Introduction
00:47 – Welcome
01:30 – A little background about Marianne
04:05 – A little background about Rachel
07:12 – A tribute to John Barrow
08:36 – Choosing communication over research
11:40 – Who is the average +Plus reader?
13:25 – The appeal of +Plus
17:05 – "Maths and hallucinations" (an article with "quite interesting comments")
22:05 – Collaborating with INI
30:32 – Plans for the future
32:45 – Terrible coffee… but good conversation
Pastor Klarc and Rachel Thomas break down Acts 7 and discuss the beauty and necessity of grace and truth, and standing for what is right even when it is difficult!
We talk with one of our prenatal case managers, Rachel Thomas, to learn more about Black Maternal Health Week, doula care, Centering, and more!
Do you have an upcoming leadership transition in your school or district? Yes? Then this episode is for you! Rachel has practical tips that you're probably going to want to write down.
IN THIS EPISODE, RACHEL SHARES:
Keys to tying social media posts to your vision, mission, and values before you ever identify a need to transition to a new leader
Tips for the search process and involving your community
A list of ways to connect and engage with the community as the new leader takes over using social media
Her best social media tip of always including a call to action in your posts
SPECIAL GUEST
Rachel Thomas, Brand and Communications Manager, Kansas Association of School Boards, Kansas
Email: rthomas@kasb.org
Twitter: https://twitter.com/RachelIsaThomas
Website: https://www.kasb.org/
Facebook: https://www.facebook.com/KASBPublicEd
Twitter: https://twitter.com/KASBPublicEd
USEFUL INFORMATION
Rachel's Post Entry Plan for Dr. Anthony Lewis - see page 3 for ideas that Rachel spoke about!
Nine Tips for Celebrating Seniors on Social Media
Free PDF - 20 Calls to Action to Drive Engagement on Social Media
MORE RESOURCES
Free Video Training: Learn the simple secrets behind social media for K12 schools!
Sign up for our free e-newsletter - click here
www.SocialSchool4EDU.com
Negotiate, or force an unconditional surrender? In the weekly mailbag, your views differ on how to end the situation in Ukraine. And for the first week in two years, no questions or comments about Covid!
Dr. Sandie Morgan and Rachel Thomas discuss a new curriculum developed as a resource for parents, social work agencies, after-school programs, and more. The Cool Aunt Series is an online prevention course developed for youth that will guide them through understanding human trafficking and provide resources and support throughout and after the curriculum.
Rachel Thomas
A graduate of UCLA with a Master's in Education and a personal survivor of human trafficking, Rachel has extensive experience teaching, training, curriculum writing, public speaking, and mentoring. As the founder of Sowers Education Group and the lead author of Ending The Game and The Cool Aunt, she has educated and inspired a wide range of audiences, including teens, social service providers, churches, teachers, college students, and law enforcement. Sowers' intervention curriculum Ending The Game is being used by over 1,000 facilitators in 36 states and helps survivors break the bonds of attachment to traffickers and the lifestyle of commercial sexual exploitation. Since 2012, Rachel and the Sowers team have reached over 150,000 live audience members and millions more through numerous media outlets, including the New Day morning show on CNN, Inside with Chris Cuomo: Anyone's Daughter on HLN, The T.D. Jakes Show, The New York Times Upfront Magazine, and ABC's Newsmakers. Rachel was not only honored by Congressman Ed Royce of California's 39th district and Los Angeles Supervisor Don Knabe for her leadership and trafficking prevention efforts, but was also nominated and appointed to the White House Advisory Council on Human Trafficking for the 2020-2022 term.
Key Points
They are moving beyond awareness and focusing on education, so youth fully understand what human trafficking is, who it happens to, who perpetrates it, why it happens, and the resources available.
The Cool Aunt Series is developed to be a resource for parents, after-school programs, social service agencies, and others, and can be completed online.
S.T.R.E.A.M.S. of Influence
S - Survival
T - Trafficker
R - Recruiter
E - Environment
A - Abuse
M - Media
S - Solicitation
Resources
EP. 196 - Rachel Thomas: Ending the Game
Sowers Education Group
Ending The Game
The Cool Aunt Series
Ensure Justice Conference
Love the show? Consider supporting us on Patreon! Become a Patron
Transcript
Dave [00:00:00] You're listening to the Ending Human Trafficking podcast. This is episode number 272, The Cool Aunt, with Rachel Thomas.
Production Credits [00:00:09] Produced by Innovate Learning, maximizing human potential.
Dave [00:00:28] Welcome to the Ending Human Trafficking podcast. My name is Dave Stachowiak.
Sandie [00:00:33] And my name is Sandie Morgan.
Dave [00:00:36] And this is the show where we empower you to study the issues, be a voice, and make a difference in ending human trafficking. Sandie and I are so glad to have Rachel Thomas back on the show today. If you didn't listen to our prior conversation with her, I'm so glad to introduce Rachel to you. She is a graduate of UCLA with a master's in education and a personal survivor of human trafficking. Rachel has extensive experience teaching, training, curriculum writing, public speaking, and mentoring. As the founder of Sowers Education Group and the lead author of Ending The Game and The Cool Aunt, she has educated and inspired a wide range of audiences, including teens, social service providers, churches, teachers, college students, and law enforcement.
Her intervention curriculum, Ending The Game, is being used by over a thousand facilitators in 36 states and helps survivors break the bonds of attachment to traffickers and the lifestyle of commercial sexual exploitation. Since 2012, Rachel and the Sowers team have reached over 150,
In the 1950s, creating a thinking machine seemed like science fiction. But John McCarthy decided to make it a reality, and he began with a language he called LISP. Colin Garvey tells us that McCarthy created the first language for artificial intelligence (AI). Sam Williams explains that early interest in thinking machines spread from academia into the business world, and that after certain projects failed to deliver on their promises, a long winter fell over the field. Ulrich Drepper notes that the dreams of artificial intelligence went beyond what the hardware of the time could achieve. But hardware grows more powerful by the day. Chris Nicholson points out that today's machines have enough processing power to handle that demand for resources, so we are in the midst of a revolutionary resurgence in AI research and development. Finally, Rachel Thomas talks about AI languages beyond LISP and explains the different kinds of tasks artificial intelligence is being prepared for.
If we really want to understand the state of women in the workforce, then having good data is essential. No one does this more effectively than the annual Women in the Workplace report from McKinsey and LeanIn.org. Now in its seventh year, it's the largest annual benchmark on women's progress in Corporate America. Host Laura Zarrow sits down with two of the report's authors, McKinsey's Jess Huang and LeanIn.org's co-founder, Rachel Thomas, to discuss the most recent report and its disturbing implications, especially for women of color. Learn more about the report at womenintheworkplace.com and leanin.org. Originally aired with Host Laura Zarrow on October 14, 2021 on SiriusXM's Business Radio, Channel 132.
Special guest is author L. Sydney Fisher, who's here to discuss the true story of college students who suffered dearly after playing with a Ouija board. Get her book, The Devil's Board.
Now an Amazon #1 bestseller inspired by TRUE EVENTS. On an American college campus in 1987, three students began playing a seemingly innocent game of contacting the dead. Word spread fast around campus and curiosity grew, expanding the group to more than forty people. Spirits were summoned almost daily, and the dark world's influence began to take its toll as one student fell gravely ill and relationships began to crumble. Months later, the dead would be resurrected, and this time there would be Hell to pay. This is their story...
Rachel Thomas was more than happy to leave the haunted house she had lived in for the last decade. She had every reason to be excited about her future as a college freshman entering Riverside Community College. But she had no way of knowing that she would find herself in the terrifying grip of the paranormal again when her new roommate, Josie Norton, and her friends began using the Ouija board. In spite of Rachel's reluctance to join the group's nocturnal ritual of contacting the dead, she finds herself sucked into the drama and a witness to the spirit's malevolent nature as strange phenomena begin happening. Weeks turn into months as Amber Simmons becomes obsessed with the game and her "ghost" friend, who assumes the identity of a human being now in the afterlife. As the malevolent spirit continues to control and manipulate Amber, the close-knit friends are terrorized until the Ouija spirit makes one final show of force, determined to kill them all!
Note from the author: The Devil's Board is based on true events that happened in the fall and spring of 1987-88. The college campus is located in a small town in the USA. To this day, the story of Ryan Banks remains a haunting mystery. All names and locations have been changed to protect the privacy of the institution and the characters of the story. Some parts of The Devil's Board have been dramatized for the sake of storytelling.