Podcasts about SFT

  • 94 PODCASTS
  • 185 EPISODES
  • 52m AVG DURATION
  • 1 WEEKLY EPISODE
  • LATEST: Dec 3, 2025

POPULARITY (2017–2024)


Best podcasts about SFT

Latest podcast episodes about SFT

The Therapy Show with Lisa Mustard
How Therapists Can Create CE Courses: Free CE Course Builder for Mental Health Clinicians with Lisa Mustard | continuing education | Podcourses | therapist entrepreneurship

The Therapy Show with Lisa Mustard

Dec 3, 2025 · 15:10


Sponsored by Berries AI: Use code TherapyShow50 for $50 off your first month - CLICK HERE. If you are a therapist or counselor looking for continuing education, check out my NBCC Approved $5 Podcourses and other continuing education offerings. Plus, get your first Podcourse half off. In this episode of The Therapy Show, I share something I've been working on behind the scenes - a free tool I created just for mental health clinicians: the CE Course Builder, a custom GPT designed to help you create and launch your own continuing education courses. If you've ever thought about teaching but felt overwhelmed by the tech, compliance, or where to even start, this tool walks you through it all, step by step. I also talk about group discounts available for practice owners (email me to discuss offering my CE Podcourses to your clinicians) and invite you to fill out a short survey to help shape future CE content. If you're ready to move beyond the therapy room and share your expertise, this episode is for you. Get my Coping with Political Stress Ebook and Peaceful Politics AI Guide. Therapist Conversation Framework: Politics in Session. A printable PDF with 97 questions to navigate political talk in therapy - without taking sides. Solution-Focused Therapy Guide: 72 questions + prompts to help adult clients clarify goals and move forward using SFT. Check out all my Counselor Resources.

The Wing Life Podcast
Episode #117 - Justin Chait

The Wing Life Podcast

Dec 3, 2025 · 38:23


This episode is brought to you by Villa Carina Apartments in beautiful Bonaire. In this episode, we sit down with the newly crowned 2025 E-Foil Surf Foil World Tour (SFT) World Champion – the undefeated e-foil racer who took the title in the season finale in Abu Dhabi. Fresh off dominating the inaugural SFT season, the Florida-based ripper (and Flightboard early adopter) joins us to break down what it actually feels like to turn a five-year hunch into a world championship, how e-foil racing went from “nice idea” to a full-blown global tour in record time, and why this sport is exploding faster than anyone predicted.

We go deep on:
- From kite-smash accidents to building one of the first e-foil schools in South Florida
- The wild Atlanta Foil Fest Enduro with Brian Grubb, Nick Leason, and 20 riders dodging submerged trees at full throttle
- Unsanctioned full-send dawn patrols through Amsterdam's canals (don't try this at home)
- Gear geek-out: custom shims, chopped tails, 900 Flow vs 707 Flux wings, aftermarket race props, and why everything is still basically stock… for now
- Why full-face helmets and downhill MTB armor are becoming mandatory at 33–35 mph
- Mental warfare on the beach, prop-wash tactics, hot launches, and pulling 3+G turns
- Traveling the world with boards but no batteries (and how the Flightboard rental network saves the day)
- The massive progression from the first dealer races in 2022 to riders now training full-time and closing the gap second by second
- Where e-foil racing is headed: open-ocean courses, city canal sprints, Everglades gator-chasing, and boards that will eventually hit 50 mph

Year one of the Surf Foil World Tour is in the books, prize money is real, brands are paying attention, and the level is skyrocketing. The champ gives us the unfiltered look at what it took to stay on top — and why 2026 is about to get even crazier. If you've ever wondered what the cutting edge of foiling actually looks, sounds, and feels like… this is it.

Follow the Surf Foil Tour → https://www.surffoilworldtour.com
Justin Chait → https://www.instagram.com/_justinchait_/

The Wing Life Podcast
Surf Foil World Tour (SFT) Show #5: Recap of Abu Dhabi

The Wing Life Podcast

Nov 26, 2025 · 27:21


This episode is brought to you by Villa Carina Apartments in beautiful Bonaire. In this episode, we catch up with Tom Hartmann – tour manager of the GKA Kite World Tour, Wingfoil World Tour, and founder of the brand-new Surf Foil World Tour (SFT) – fresh from the biggest water sports spectacle the Middle East has ever seen in Abu Dhabi and now chasing the final Kite World Tour stops in Brazil.

Tom takes us behind the scenes of the massive nine-day Abu Dhabi event on the soon-to-be “Miami Beach of the Gulf” (Fahid Island), where kite big air, wingfoil racing, e-foil, and wakefoil all shared the spotlight, 2,500+ spectators showed up on weekends, and the whole thing was broadcast live on TV across the region. With €10,000 prize money per SFT discipline, perfect glassy morning conditions, and a level of organization that left athletes speechless, this was the perfect season finale for the inaugural Surf Foil World Tour.

- Abu Dhabi deep dive – why foiling (e-foil, wakefoil, wing, kite, surf, pump) is exploding in the Gulf and how the event showcased every flavor of the sport.
- E-foil racing at the highest level yet – Justin Chait remains undefeated in 2025, Agnes takes the women's division, and we talk 3G corners, wingtip-out carving, and why technical skill still beats raw speed.
- Wakefoil's breakout moment – first fully independent SFT wakefoil comp, drone + boat broadcasting magic, and why wakefoiling could be the most spectator-friendly foiling discipline out there.
- The massive growth nobody saw coming – from a hopeful start to nine events worldwide in year one, with a 2026 calendar dropping in the next couple weeks.
- What's next for SFT in 2026 – more surf foil, downwind, wakefoil, the return of the epic indoor Düsseldorf pump & wing event, and a brand-new Foil Assist discipline that mixes propulsion take-offs with pure pumping sections.
- Plus Tom's love for açaí bowls at Brazilian sunset and maybe sneaking in some surf trips to Nicaragua or Costa Rica before heading home.

Year one of the Surf Foil World Tour is officially in the books and it's safe to say foiling just went global – big budgets, big crowds, and bigger stoke. Here's to 2026 being even wilder.

Follow the Surf Foil Tour → https://www.surffoiltour.com

The MAD Podcast with Matt Turck
Open Source AI Strikes Back — Inside Ai2's OLMo 3 ‘Thinking'

The MAD Podcast with Matt Turck

Nov 20, 2025 · 88:10


In this special release episode, Matt sits down with Nathan Lambert and Luca Soldaini from Ai2 (the Allen Institute for AI) to break down one of the biggest open-source AI drops of the year: OLMo 3. At a moment when most labs are offering “open weights” and calling it a day, Ai2 is doing the opposite — publishing the models, the data, the recipes, and every intermediate checkpoint that shows how the system was built. It's an unusually transparent look into the inner machinery of a modern frontier-class model.

Nathan and Luca walk us through the full pipeline — from pre-training and mid-training to long-context extension, SFT, preference tuning, and RLVR. They also explain what a thinking model actually is, why reasoning models have exploded in 2025, and how distillation from DeepSeek and Qwen reasoning models works in practice. If you've been trying to truly understand the “RL + reasoning” era of LLMs, this is the clearest explanation you'll hear.

We widen the lens to the global picture: why Meta's retreat from open source created a “vacuum of influence,” how Chinese labs like Qwen, DeepSeek, Kimi, and Moonshot surged into that gap, and why so many U.S. companies are quietly building on Chinese open models today. Nathan and Luca offer a grounded, insider view of whether America can mount an effective open-source response — and what that response needs to look like.

Finally, we talk about where AI is actually heading. Not the hype, not the doom — but the messy engineering reality behind modern model training, the complexity tax that slows progress, and why the transformation between now and 2030 may be dramatic without ever delivering a single “AGI moment.” If you care about the future of open models and the global AI landscape, this is an essential conversation.

Allen Institute for AI (Ai2)
Website - https://allenai.org
X/Twitter - https://x.com/allen_ai

Nathan Lambert
Blog - https://www.interconnects.ai
LinkedIn - https://www.linkedin.com/in/natolambert/
X/Twitter - https://x.com/natolambert

Luca Soldaini
Blog - https://soldaini.net
LinkedIn - https://www.linkedin.com/in/soldni/
X/Twitter - https://x.com/soldni

FIRSTMARK
Website - https://firstmark.com
X/Twitter - https://twitter.com/FirstMarkCap

Matt Turck (Managing Director)
Blog - https://mattturck.com
LinkedIn - https://www.linkedin.com/in/turck/
X/Twitter - https://twitter.com/mattturck

(00:00) – Cold Open
(00:39) – Welcome & today's big announcement
(01:18) – Introducing the OLMo 3 model family
(02:07) – What “base models” really are (and why they matter)
(05:51) – Dolma 3: the data behind OLMo 3
(08:06) – Performance vs Qwen, Gemma, DeepSeek
(10:28) – What true open source means (and why it's rare)
(12:51) – Intermediate checkpoints, transparency, and why Ai2 publishes everything
(16:37) – Why Qwen is everywhere (including U.S. startups)
(18:31) – Why Chinese labs go open source (and why U.S. labs don't)
(20:28) – Inside ATOM: the U.S. response to China's model surge
(22:13) – The rise of “thinking models” and inference-time scaling
(35:58) – The full OLMo pipeline, explained simply
(46:52) – Pre-training: data, scale, and avoiding catastrophic spikes
(50:27) – Mid-training (tail patching) and avoiding test leakage
(52:06) – Why long-context training matters
(55:28) – SFT: building the foundation for reasoning
(1:04:53) – Preference tuning & why DPO still works
(1:10:51) – The hard part: RLVR, long reasoning chains, and infrastructure pain
(1:13:59) – Why RL is so technically brutal
(1:18:17) – Complexity tax vs AGI hype
(1:21:58) – How everyone can contribute to the future of AI
(1:27:26) – Closing thoughts
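For readers new to the acronym that keeps appearing in this listing, SFT (supervised fine-tuning) trains a language model on prompt–response pairs with an ordinary next-token cross-entropy loss, usually masking the prompt tokens so only the response is learned. Below is a minimal illustrative sketch of that objective, with toy token ids and random logits standing in for a real tokenizer and model; it is not Ai2's OLMo 3 recipe, just the core loss computation.

import torch
import torch.nn.functional as F

# Toy SFT example: hypothetical token ids, not from a real tokenizer.
vocab_size = 32
prompt = [5, 9, 3]        # user prompt tokens (assumed)
response = [7, 2, 11, 4]  # target assistant response tokens (assumed)

input_ids = torch.tensor([prompt + response])              # (1, seq_len)
labels = torch.tensor([[-100] * len(prompt) + response])   # -100 masks prompt tokens

# Stand-in "model": random logits over the vocabulary at each position.
logits = torch.randn(1, input_ids.shape[1], vocab_size, requires_grad=True)

# Shift so each position predicts the next token, as in causal LM training.
shift_logits = logits[:, :-1, :]
shift_labels = labels[:, 1:]

loss = F.cross_entropy(
    shift_logits.reshape(-1, vocab_size),
    shift_labels.reshape(-1),
    ignore_index=-100,  # masked prompt positions contribute nothing to the loss
)
loss.backward()         # gradients flow only from response positions
print(float(loss))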

The Therapy Show with Lisa Mustard
Turning Podcasts into Continuing Education Courses: Dr. Tobin Richardson's Journey with Save the Therapist | therapist entrepreneurship | affordable CEUs for therapists | professional development

The Therapy Show with Lisa Mustard

Nov 12, 2025 · 31:15


Sponsored by Berries: Use code TherapyShow50 for $50 off your first month - CLICK HERE. If you are a therapist or counselor looking for continuing education, check out my NBCC Approved $5 Podcourses and other continuing education offerings. Plus, get your first Podcourse half off. In this episode of The Therapy Show, I'm thrilled to chat with Dr. Tobin Richardson, a fellow continuing education creator and the founder of Save the Therapist. We dive into how Tobin combined his passion for narrative podcasting and his background in counselor education to create a unique, story-driven CE platform for therapists. He shares the behind-the-scenes of launching CE courses that actually engage and inspire, why he believes continuing education needs a serious upgrade, and how he's offering these courses completely free thanks to his creative business model. We also get real about the challenges of building something meaningful, how to market CE content without burning out, and what it's like to cover nuanced, sometimes controversial topics with honesty and integrity. If you've ever thought about creating your own CE content or just want to hear how another therapist is innovating in the field, this episode is for you. Tune in and get inspired by Tobin's journey and maybe even find your next favorite CE podcast. Tobin Richardson, EdD, NCC, is a counselor educator with a decade of experience building and delivering innovative educational resources to therapists in both community mental health and large VC-backed provider organizations. Since launching in early 2025, his NPR-style CE platform Save the Therapist has garnered over 4,000 registered therapists with over 7,000 course completions. And don't forget! If you're ready to spend less time on notes and more time doing what you love, check out heyberries.com. Use code THERAPYSHOW50 for $50 off your first month with Berries. Get my Coping with Political Stress Ebook and Peaceful Politics AI Guide. Therapist Conversation Framework: Politics in Session. A printable PDF with 97 questions to navigate political talk in therapy - without taking sides. Solution-Focused Therapy Guide: 72 questions + prompts to help adult clients clarify goals and move forward using SFT. Check out all my Counselor Resources.

The Therapy Show with Lisa Mustard
How Podcourses Are Changing Continuing Education for Therapists with Lisa Mustard | Podcourses | therapist entrepreneurship | burnout recovery

The Therapy Show with Lisa Mustard

Oct 16, 2025 · 31:26


Sponsored by Berries: Use code TherapyShow50 for $50 off your first month - CLICK HERE. If you are a therapist or counselor looking for continuing education, check out my NBCC Approved $5 Podcourses and other continuing education offerings. Plus, get your first Podcourse half off. In this episode, I'm sharing my recent conversation from Between Sessions with Berries, a podcast created for mental health professionals who want to simplify documentation, fight burnout, and reconnect with their purpose. Kym Tolson and I dive into my journey from therapist to continuing-education creator, how burnout inspired me to reimagine CE for busy clinicians, and what it takes to blend creativity, courage, and aligned action into your career. We also discuss lessons learned from losing a major podcast sponsor, building confidence through reinvention, and the power of staying curious in the face of change. If you've ever felt stuck, uninspired, or ready for something new in your professional life, this conversation will encourage you to take the next step - one aligned action at a time. And don't forget! If you're ready to spend less time on notes and more time doing what you love, check out heyberries.com. Use code THERAPYSHOW50 for $50 off your first month with Berries. Get my Coping with Political Stress Ebook and Peaceful Politics AI Guide. Therapist Conversation Framework: Politics in Session. A printable PDF with 97 questions to navigate political talk in therapy - without taking sides. Solution-Focused Therapy Guide: 72 questions + prompts to help adult clients clarify goals and move forward using SFT. Check out all my Counselor Resources.

ICMA Podcast
ICMA Quarterly Briefing, Q4 2025: T+1: EU High Level Roadmap and recommendations and SFTs

ICMA Podcast

Oct 15, 2025 · 7:50


Alex Westphal, Senior Director, Market Practice and Regulatory Policy, talks about the latest milestones in Europe's journey to T+1, also looking at SFT-specific impacts and discussions that are central to the success of T+1.

The Wing Life Podcast
Surf Foil World Tour (SFT) Show #4: Recap of the Pump Foil World Cup Traunsee 2025

The Wing Life Podcast

Oct 8, 2025 · 37:34


This episode is brought to you by Villa Carina Apartments in beautiful Bonaire. In this episode, we sit down with Tom Hartmann and Nico Hopp of Hoppline to dive into the exhilarating world of pump foiling at Lake Traunsee, Upper Austria. Broadcasting from their respective homes, Tom and Nico share their passion for this rapidly growing sport, the vibrant community, and the unique vibe of the SFT.

- Lake Traunsee Triumph: Tom and Nico recap the SFT event at Lake Traunsee, a stunning venue surrounded by mountains with a top-notch setup. With four starting docks and a professional organization running alongside the Austrian Wing Foil Championships, the event offered a perfect mix of competition and community, capped off with exciting wake foiling sessions behind a boat.
- Pump Foiling's Appeal: Tom and Nico discuss the sport's accessibility, thriving in flatwater lakes and ideal for urban and inland locations. They highlight how pump foiling draws in everyone from pros to beginners.
- Community-Driven Competition: Nico emphasizes the inclusive nature of the SFT, where pros like Eden Fiander and Robert von Roll race alongside amateurs, creating a social and competitive atmosphere. Tom explains the division structure—pro, open, masters, youth, and women's categories—ensuring everyone, from seasoned athletes to first-timers, feels motivated to join.
- Gear and Technique Evolution: The duo dives into the latest gear trends, with Nico noting the pros' use of tiny, high-performance wings and unique dock-start techniques. From Eden's strap-based approach to Rob's hands-on style, the diversity in equipment and skills keeps the sport dynamic and exciting.
- A Family Affair: Tom highlights the family-friendly vibe, with free dinners for competitors and their families, fostering a welcoming environment. Nico shares a heartwarming story of a young competitor and his mother camping out to participate, showcasing the sport's appeal across generations.
- The Future of SFT: Tom reveals plans for the final 2025 event in Abu Dhabi, featuring e-foiling and wake foiling, and a 2026 season kicking off in Düsseldorf. With ambitions to expand prize money and bring events to urban centers like Venice's Grand Canal, the SFT aims to grow pump foiling's global reach.

Join us for a lively discussion packed with insights into pump foiling's rise, the thrill of close-knit competition, and the community spirit driving this niche sport forward. From stunning venues to innovative gear, this episode captures the excitement of foiling without wind.

Visit: https://www.instagram.com/supfoiltour & https://www.instagram.com/hoppline/

Six-Figure Trucker
EP161: Practical Lessons from the Road with JB Njoroge

Six-Figure Trucker

Oct 3, 2025 · 25:41


We're pleased to welcome the seasoned driver, Geoffrey 'JB' Njoroge, to the show for this episode of SFT! Today's conversation features a lot of practical wisdom regarding the various opportunities in trucking as well as the daily execution of the craft. JB also recounts some tense and amusing moments from his life behind the wheel as a driver and trainer. You'll enjoy the wit and wisdom of this guest as we once again dive deep into the world of driveaway. If you're not already subscribed to the show, please do so in order to see and hear our weekly content and so you don't miss guys like JB. In fact, he's going to rejoin the show next week to talk about his home country and foundational passions.

Show Notes:
- John and “JB” share a laugh about the exercise challenges over the road (0:44)
- JB navigates COVID and other experiences in his trucking journey (4:00)
- Finding Norton and getting “spoiled” in Driveaway (6:08)
- Crazy stories and sage advice from the Road (9:57)
- The importance of planning, logs, and dispatch relations (16:12)
- JB's stats, certifications, and future plans (21:15)

Keep Trucking, JB!

The Six-Figure Trucker is a weekly podcast about driveaway trucking brought to you by Norton Transport. For more information or to subscribe, please visit Six-FigureTrucker.com. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

The Sustainable Food Trust Podcast
India Hamilton on Jersey's food and farming culture and the impact of events like the Regen Gathering

The Sustainable Food Trust Podcast

Sep 30, 2025 · 39:57


At this year's Regen Gathering on the island of Jersey, our CEO, Patrick Holden, had the chance to meet with the event's co-founder, India Hamilton, for the latest episode of the SFT Podcast. Alongside founding Jersey's Regen Gathering – an annual event which brings together a diverse range of people and ideas to discuss the innovative food, farming and finance approaches that are taking place on Jersey – India is also a chef, food systems expert and heads up HYPHA Consulting, a regenerative consultancy committed to pioneering sustainable futures within the rural economy and food system. In 2018, India was also involved in developing The Sustainable Cooperative (SCOOP), a consumer-led cooperative which aims to create a more sustainable supply of food on Jersey. In this episode, Patrick and India talk about the beginnings of the Jersey Regen Gathering and how its conception was inspired by other food and farming events like Groundswell, what the Jersey government is doing to support their farmers and how this differs from what's happening in the UK, and the connection between public health and our food systems. To connect with India, follow her on LinkedIn. To find out more about the Regen Gathering, visit the website where you can also find details of this year's Jersey Farming Conference, taking place in November. To listen to more SFT podcasts, featuring some of the biggest names in regenerative food and farming, head to our main podcast page. And to keep up to date with our news, you can subscribe to our monthly newsletter or follow us on Instagram, Facebook, LinkedIn and Bluesky. This conversation was recorded in September 2025.

The top AI news from the past week, every ThursdAI

Woohoo, hey y'all, Alex here, I'm back from the desert (pic at the end) and what a great feeling it is to be back in the studio to talk about everything that happened in AI! It's been a pretty full week (or two) in AI, with the coding agent space heating up, Grok entering the ring and taking over free tokens, Codex 10xing usage and Anthropic... well, we'll get to Anthropic. Today on the show we had Roger and Bhavesh from Nous Research cover the awesome Hermes 4 release and the new PokerBots benchmark, then we had a returning favorite, Kwindla Hultman Kramer, to talk about the GA of RealTime voice from OpenAI. Plus we got some massive funding news, some drama with model quality on Claude Code, and some very exciting news right here from CoreWeave acquiring OpenPipe!

The Therapy Show with Lisa Mustard
How Free Speech Supports Mental Health: Dr. Chloe Carmichael on “Can I Say That?” | self-expression | cancel culture | self-censorship

The Therapy Show with Lisa Mustard

Sep 4, 2025 · 28:37


The Sustainable Food Trust Podcast
Molly Biddell on rewilding at Knepp estate and measuring social impacts

The Sustainable Food Trust Podcast

Sep 2, 2025 · 38:47


After both appearing on the Grazing for Good: Livestock and Biodiversity in the UK panel at ORFC earlier this year, SFT CEO, Patrick Holden, sat down once again with Molly Biddell, Head of Natural Capital at Knepp Estate – a 3,500-acre rewilding project in West Sussex – for an episode of the SFT Podcast. Her work involves leveraging nature markets and policy for Knepp, Weald to Waves and the River Adur Landscape Recovery project. She also works part-time at Hampton Estate, a family-run regenerative farming business, facilitates the Upper Adur Farming Cluster group and is a columnist for Farmers Weekly. In this episode, Patrick and Molly talk about the work going on at Knepp Estate – ‘a radical rewilding experiment', says Molly – including the success they've had so far in terms of an increase in biodiversity, carbon sequestration and habitat restoration. They also talk about the role of projects like Knepp Estate to improve public awareness of rewilding and more sustainable agricultural methods, before finishing the episode with a discussion on measuring the climate, nature and social impacts of such projects. To hear more from Molly, you can read her column for Farmers Weekly here. To find out more about Knepp Estate, visit: https://knepp.co.uk. To listen to more SFT podcasts, featuring some of the biggest names in regenerative food and farming, head to our main podcast page. And to keep up to date with our news, you can subscribe to our monthly newsletter or follow us on Instagram, X, Facebook and Bluesky. This conversation was recorded in May 2025.

The Therapy Show with Lisa Mustard
Love Over Politics: How to Stay Connected When Family Disagrees with Lisa Mustard | Political Stress | Family Conflict | Family Disagreements

The Therapy Show with Lisa Mustard

Aug 28, 2025 · 11:58


The Therapy Show with Lisa Mustard
How to Choose Continuing Education That Actually Improves Your Therapy Practice with Lisa Mustard | affordable CEUs for therapists | clinical skills | professional development

The Therapy Show with Lisa Mustard

Aug 22, 2025 · 8:57


If you are a therapist or counselor looking for continuing education, check out my NBCC Approved $5 Podcourses and other continuing education offerings. Plus, get your first Podcourse half off.

The Therapy Show with Lisa Mustard
Supporting Survivors of Domestic Violence in Therapy with Catrina Drinning-Davis, LPC-S, CCTP | NBCC approved provider | continuing education | Therapist training

The Therapy Show with Lisa Mustard

Aug 6, 2025 · 20:19


If you are a therapist or counselor looking for continuing education, check out my NBCC Approved $5 Podcourses and other continuing education offerings. Plus, get your first Podcourse half off. Check out all my Counselor Resources. 

The Sustainable Food Trust Podcast
Rupert Sheldrake on bridging science and spirituality

The Sustainable Food Trust Podcast

Aug 5, 2025 · 54:40


Following their session together at this year's Oxford Real Farming Conference – Land, Food and Spirit – SFT CEO, Patrick Holden, and renowned biologist and author, Rupert Sheldrake, reconnected to record an episode of the SFT Podcast. Rupert's impressive career started at Cambridge University where he studied Natural Sciences, before receiving a scholarship to attend Harvard University, studying History and Philosophy of Science. Rupert later returned to Cambridge where he gained a PhD in Plant Development. This eventually led him to India, where he worked at The International Crops Research Institute for the Semi-Arid Tropics (ICRISAT), to develop a more holistic approach to biology and science: “the mechanistic, materialist paradigm was too limiting and constricting”, resulting in the idea of ‘morphic resonance'. Rupert has also authored more than 100 technical papers and nine books, including Science and Spiritual Practices. This episode takes a slightly different turn from our usual episodes, with less focus on agriculture and more on the role of spirituality in science. During this episode, Patrick and Rupert discuss bridging the gap between spirituality and science, they ask whether farms could be considered ‘holy places', Rupert explains his theory of morphic resonance, and he talks about his involvement with the British Pilgrimage Trust. To find out more about Rupert and his work, visit https://www.sheldrake.org, and follow him on Instagram and YouTube. To listen to more SFT podcasts, featuring some of the biggest names in regenerative food and farming, head to our main podcast page. And to keep up to date with our news, you can subscribe to our monthly newsletter or follow us on Instagram, X, Facebook and Bluesky. This conversation was recorded in April 2025.

Timestamps:
0.00: Intro
0.55: Welcome Rupert!
1.11: Patrick and Rupert at the Oxford Real Farming Conference (ORFC) 2025
2.20: Rupert's career beginnings
3.59: What is ‘morphic resonance'?
4.53: Is there a connection between morphic resonance and epigenetics?
6.43: Building a bridge between science and spirituality
8.58: The influences of Johann Wolfgang von Goethe and Rudolf Steiner
11.20: Rupert's spiritual journey
17.00: What is a ‘holy place'?
21.59: Choral Evensong and its place at conferences like ORFC
27.56: Rupert's involvement with the British Pilgrimage Trust
32.25: Could farms be considered ‘holy places'?
34.10: Rogation Sunday and patronal festivals
40.21: What's drawing people back – regardless of religion – to holy places and patronal festivals?
43.07: Revaluing the parish and local community
48.36: Saying grace at mealtimes
53.30: Thank you Richard
54.21: Outro

The Sustainable Food Trust Podcast
Dr Federica on the link between environmental health and nutrition and the importance of improved public food education

The Sustainable Food Trust Podcast

Jul 1, 2025 · 82:28


This month we bring you a special edition of the podcast, recorded at London Climate Action Week as part of Extreme Hangout's live podcast series. Our CEO Patrick Holden is joined by Dr Federica Amati, Head Nutritionist at ZOE, with a special guest appearance from Professor Tim Spector, Founder of ZOE, for the first half of the episode. Dr Federica Amati's career boasts a plethora of academic achievements – alongside her position as Head Nutritionist at ZOE (the science and nutrition research company), Dr Federica also holds a PhD in Clinical Medicine Research, a master's in Public Health and is an Association for Nutrition (AfN) Registered Nutritionist. She has also authored Recipes for a Better Menopause and the Sunday Times Bestseller, Every Body Should Know This. Her approach focuses on improving overall dietary quality throughout the life course, using food as the best tool to transform health. During their conversation together, Patrick and Dr Federica talk about the importance of reconnecting people with how their food is grown, the current culture of litigation and fear of the wrong kinds of bacteria in our foods, and how environmental health and nutrition are intrinsically linked. The final 20 minutes of this episode features a Q&A segment with the audience. This episode was recorded and produced by Extreme Hangout. To find out more about Dr Federica, follow her on Instagram and LinkedIn. To listen to more SFT podcasts, featuring some of the biggest names in regenerative food and farming, head to our main podcast page. And to keep up to date with our news, you can subscribe to our monthly newsletter or follow us on Instagram, X, Facebook and Bluesky. This conversation was recorded in June 2025.

The Wing Life Podcast
Surf Foil World Tour (SFT) Show #2: Recap of Atlanta Foil Fest 2025

The Wing Life Podcast

Jun 25, 2025 · 38:46


In this episode we catch up with Tom Hartmann about the Atlanta Foil Fest at Lake Lanier's Olympic Park. Tom dives into the three-day event, featuring E-Foil, Pump Foil, Wake Foil, and Airchair foiling competitions, alongside demos and clinics. From Justin Chait's dominant performance to Nick Leason's E-Foil legacy and the innovative Betafoil Enduro, the event united foilers from the US, Europe, and beyond. Tom also teases upcoming Surf Foil World Tour events at Lake Garda and Abu Dhabi, promising bigger competitions and live streams.

Episode Highlights:
- Atlanta Foil Fest's debut as a foiling mecca with a custom-built start dock
- High-level E-Foil and Pump Foil races, plus the return of Airchair
- Meeting foiling pioneer Nick Leason and testing Betafoil's massive wings
- Lake Garda's Foiling Week and Abu Dhabi's massive prize purse on the horizon
- Growing the global foiling community through passion and connection

Follow Tom & SFT: @surffoilworldtour on Instagram, Facebook, YouTube
Website: surffoilworldtour.com

The Sustainable Food Trust Podcast
Max Jones on the importance of preserving traditional food practices and knowing the story behind our food

The Sustainable Food Trust Podcast

Jun 3, 2025 · 42:46


For this episode of the SFT podcast, Max Jones – transhumance guide and traditional foods archivist – visits our CEO, Patrick Holden, on Patrick's farm in Wales. Alongside his work as a transhumance guide – the practice of moving livestock from one grazing ground to another in accordance with the seasons – Max Jones is also a writer, photographer, educator and founder of Up There The Last, a project which aims to reconnect people with their food and educate them about the traditional food practices of the past, which still exist in some parts of the world today. From rare cheese production in the heights of the Alps, to traditional wild salmon smoking in the Republic of Ireland, Max Jones' journey to seek out and learn more about traditional food practices has taken him all over the world and led him to meet the people working hard to preserve these essential practices that are at risk of being left behind and forgotten. In this episode, Max and Patrick talk about the threats to traditional foods, including modern technology and health and safety regulations, as well as the presence of an off-the-record 'food counterculture' that exists to protect ancient practices. To find out more about Max, follow him on Instagram, and visit the Up There The Last website and Substack page. You can also read the article that Max wrote for the SFT about the importance of preserving traditional food practices, here: https://sustainablefoodtrust.org/news-views/preserving-the-practices-of-traditional-foods/. To listen to more SFT podcasts, featuring some of the biggest names in regenerative food and farming, head to our main podcast page. And to keep up to date with our news, you can subscribe to our fortnightly newsletter or follow us on Instagram, X, Facebook and Bluesky. This conversation was recorded in August 2024.

Farm Gate
Wheat from the Chaff: Net zero and Monbiot

Farm Gate

May 30, 2025 · 56:42


ffinlo Costain (8point9.com) and Joe Stanley (GWCT Allerton Project) discuss:
- Net zero reports from The Tony Blair Institute and the AFN Network+
- UK climate change preparedness
- UK Government 'retakes' the decision to scrap SFI 2024
- Anaerobic digestion
- And those reports - by FAI and SFT - that were damned by Monbiot.

The Wing Life Podcast
Surf Foil World Tour (SFT) Show #1 - Introduction to SFT

The Wing Life Podcast

May 28, 2025 · 43:14


We're thrilled to kick off our new role as the official podcast of the Surf Foil World Tour (SFT)! In this episode, we sit down with Jørgen Vogt and Tom Hartmann, the masterminds behind the SFT, a groundbreaking world tour for non-wind-powered foiling disciplines. From its roots in the GKA (Kite World Tour) and GWA (Wing Foil World Tour) to embracing pump foiling, surf foiling, and more, Jørgen and Tom share how the SFT is uniting passionate amateurs and pros alike, fostering a vibrant foiling community. This chat dives into the tour's grassroots vibe and big ambitions.

In this episode, you'll discover:
- SFT's Origin Story: How a 2019 Cape Town car chat sparked a new tour, building on Jørgen's GKA success and Tom's windsurf tour experience.
- Five Core Disciplines: Pump foiling, surf foiling, downwind foiling, e-foiling, and wake foiling—accessible sports for lake lovers, yacht owners, and wave chasers.
- Grassroots Appeal: Why the SFT's Pro-Am approach welcomes enthusiasts, not just pros, fostering friendships and community at events like Pensacola and Sicily.
- Global Growth: From Düsseldorf's indoor World Cup to North America's rising demand, with the Atlanta Foil Fest World Cup (June 13-15, 2025) on the horizon.
- Event Challenges: Navigating financial hurdles and explaining niche foiling to sponsors, unlike the established GKA and GWA tours.
- Jørgen & Tom's Partnership: A chance meeting at a Mauritius Kite World Cup party led to a dynamic duo driving foiling's future.
- Community Impact: How SFT events, like Sicily's Water Experience, invite newcomers to try foiling, sparking passion across generations.

Check out the SFT at https://www.surffoilworldtour.com/ for event schedules and details. Join the foiling community and catch the Atlanta Foil Fest World Cup vibe!

This Week in Machine Learning & Artificial Intelligence (AI) Podcast
From Prompts to Policies: How RL Builds Better AI Agents with Mahesh Sathiamoorthy - #731

This Week in Machine Learning & Artificial Intelligence (AI) Podcast

May 13, 2025 · 61:25


Today, we're joined by Mahesh Sathiamoorthy, co-founder and CEO of Bespoke Labs, to discuss how reinforcement learning (RL) is reshaping the way we build custom agents on top of foundation models. Mahesh highlights the crucial role of data curation, evaluation, and error analysis in model performance, and explains why RL offers a more robust alternative to prompting, and how it can improve multi-step tool use capabilities. We also explore the limitations of supervised fine-tuning (SFT) for tool-augmented reasoning tasks, the reward-shaping strategies they've used, and Bespoke Labs' open-source libraries like Curator. We also touch on the models MiniCheck for hallucination detection and MiniChart for chart-based QA. The complete show notes for this episode can be found at https://twimlai.com/go/731.

The Sustainable Food Trust Podcast
Nic Renison on her approach to regenerative grazing

The Sustainable Food Trust Podcast

May 6, 2025 · 38:40


To coincide with the release of our new report, Grazing Livestock: It's not the cow but the how, the latest guest on the SFT Podcast this month is Nic Renison. Nic is a regenerative farmer based in Cumbria where she farms alongside her husband, Paul (Reno), at Cannerheugh Farm. The daughter of dairy farmers, Nic grew up within the conventional, high production agricultural environment, growing food with little thought of the environment. This all changed in 2012 when Nic and Reno had a 'light bulb' moment after visiting an organic farm in Northumberland, which inspired them to start employing more regenerative farming methods. In 2018, alongside Liz Genever, Nic co-founded Carbon Calling – a conference created for farmers, by farmers, to share ideas and exchange knowledge on all things farming and regenerative agriculture. During the episode Nic and Patrick discuss Nic's early farming influences, her and her husband's journey from conventional to regenerative farming methods and the origins of the Carbon Calling conference, and how it supports the wider farming community. To find out more about Nic and Cannerheugh Farm, follow their journey on Instagram and visit their website here. To listen to more SFT podcasts, featuring some of the biggest names in regenerative food and farming, head to our main podcast page. And to keep up to date with our news, you can subscribe to our fortnightly newsletter or follow us on Instagram, X or Facebook.

LessWrong Curated Podcast
“Alignment Faking Revisited: Improved Classifiers and Open Source Extensions” by John Hughes, abhayesian, Akbir Khan, Fabien Roger

LessWrong Curated Podcast

Apr 9, 2025 · 41:04


In this post, we present a replication and extension of an alignment faking model organism:
Replication: We replicate the alignment faking (AF) paper and release our code.
Classifier Improvements: We significantly improve the precision and recall of the AF classifier. We release a dataset of ~100 human-labelled examples of AF for which our classifier achieves an AUROC of 0.9, compared to 0.6 from the original classifier.
Evaluating More Models: We find that Llama family models, other open-source models, and GPT-4o do not AF in the prompted-only setting when evaluated using our new classifier (other than a single instance with Llama 3 405B).
Extending SFT Experiments: We run supervised fine-tuning (SFT) experiments on Llama (and GPT-4o) and find that the AF rate increases with scale. We release the fine-tuned models and scripts on Hugging Face.
Alignment faking on 70B: We find that Llama 70B alignment fakes when both using the system prompt in the [...]

Outline:
(02:43) Method
(02:46) Overview of the Alignment Faking Setup
(04:22) Our Setup
(06:02) Results
(06:05) Improving Alignment Faking Classification
(10:56) Replication of Prompted Experiments
(14:02) Prompted Experiments on More Models
(16:35) Extending Supervised Fine-Tuning Experiments to Open-Source Models and GPT-4o
(23:13) Next Steps
(25:02) Appendix
(25:05) Appendix A: Classifying alignment faking
(25:17) Criteria in more depth
(27:40) False positives example 1 from the old classifier
(30:11) False positives example 2 from the old classifier
(32:06) False negative example 1 from the old classifier
(35:00) False negative example 2 from the old classifier
(36:56) Appendix B: Classifier ROC on other models
(37:24) Appendix C: User prompt suffix ablation
(40:24) Appendix D: Longer training of baseline docs

First published: April 8th, 2025
Source: https://www.lesswrong.com/posts/Fr4QsQT52RFKHvCAH/alignment-faking-revisited-improved-classifiers-and-open
Narrated by TYPE III AUDIO.
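The classifier evaluation described above rests on standard binary-classification metrics; a minimal sketch of how AUROC, precision, and recall could be computed against human labels follows. The labels and scores below are made up for illustration and are not the post's released dataset.

from sklearn.metrics import roc_auc_score, precision_score, recall_score

# Hypothetical human labels (1 = alignment faking) and classifier scores.
human_labels = [1, 0, 1, 1, 0, 0, 1, 0]
classifier_scores = [0.91, 0.12, 0.74, 0.66, 0.40, 0.08, 0.85, 0.55]

# AUROC is threshold-free: it ranks scores against the human labels.
auroc = roc_auc_score(human_labels, classifier_scores)

# Precision and recall at an assumed 0.5 decision threshold.
threshold = 0.5
preds = [int(s >= threshold) for s in classifier_scores]
precision = precision_score(human_labels, preds)
recall = recall_score(human_labels, preds)

print(f"AUROC={auroc:.2f} precision={precision:.2f} recall={recall:.2f}")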

The Sustainable Food Trust Podcast
Richard Higgins on the influence of Sir Albert Howard and why we should be using human manure as fertiliser

The Sustainable Food Trust Podcast

Apr 1, 2025 · 33:46


Richard Higgins, chairman and CEO of Good Gardeners International, is our guest on the latest episode of the SFT Podcast. Alongside being CEO of Good Gardeners International (GGI), Richard is also a philosopher, fungi specialist, holistic scientist, and Director of Sustainable Agriculture London. He grew up on a mixed farm in Somerset and took his National Diploma in Agriculture (NDA) in Farm and Grassland Management at the Royal Berkshire College of Agriculture. He later completed a 10-year postgraduate study of the soil fertility works of Sir Albert Howard while travelling and teaching from China to Hawaii. In this episode, Richard talks to Patrick about Sir Albert Howard's influence on his own career, and how agriculture intersects with the work of Good Gardeners International – including the charity's demonstration farm, its innovative composting system and the value of human manure as fertiliser. Visit Good Gardeners International here to find out more about their work and follow them on their social media channels @GoodGardenersINTL. To listen to more SFT podcasts, featuring some of the biggest names in regenerative food and farming, head to our main podcast page. And to keep up to date with our news, you can subscribe to our fortnightly newsletter or follow us on Instagram, X or Facebook.

The Sustainable Food Trust Podcast
Jamie Feilden on the transformational power of farm visits for young people and the value of an educated public

The Sustainable Food Trust Podcast

Mar 4, 2025 · 29:33


Joining our CEO, Patrick Holden, for this episode of the podcast is Jamie Feilden, founder of Jamie's Farm. Jamie Feilden founded Jamie's Farm in 2009, a charity which seeks to transform the lives of vulnerable children through farming, food and therapy. 15 years later, Jamie's Farm works with over 2,300 children a year across seven farms, and aims to offer as many children as possible an opportunity to improve their wellbeing, boost engagement and develop key life-skills, whilst spending time on a farm.  In this episode, Jamie shares with Patrick how his experiences as a history teacher in Croydon led to the inception of Jamie's Farm, as well as discussing his recent involvement in the SFT's Beacon Farms Network, and why an educated public is key to achieving positive change across our food and farming systems. Visit Jamie's Farm here to find out more about their work and follow them on their social media channels at @JamiesFarm. To listen to more SFT podcasts, featuring some of the biggest names in regenerative food and farming, head to our main podcast page. And to keep up to date with our news, you can subscribe to our fortnightly newsletter or follow us on Instagram, X or Facebook.

The Effortless Podcast
Teaching AI to Think: Reasoning, Mistakes & Learning with Alex Dimakis - Episode 11: The Effortless Podcast

The Effortless Podcast

Mar 1, 2025 · 81:34


In this episode, Amit and Dheeraj dive deep into the world of AI reasoning models with Alex, an AI researcher involved in OpenThinker and OpenThoughts. They explore two recent groundbreaking papers—SkyT1 and S1 (Simple Test-Time Scaling)—that showcase new insights into how large language models (LLMs) develop reasoning capabilities. From structured reasoning vs. content accuracy to fine-tuning efficiency and the role of active learning, this conversation highlights the shift from prompt engineering to structured supervised fine-tuning (SFT) and post-training techniques. The discussion also touches on open weights, open data, and open-source AI, revealing the evolving AI landscape and its impact on startups, research, and beyond.

Key Topics & Chapter Markers
[00:00] Introduction – Why reasoning models matter & today's agenda
[05:15] Breaking Down SkyT1 – Structure vs. Content in reasoning
[15:45] Open weights, open data, and open-source AI
[22:30] Fine-tuning vs. RL – When do you need reinforcement learning?
[30:10] S1 and the power of test-time scaling
[40:25] Budget forcing – Making AI "think" more efficiently
[50:50] RAG vs. SFT – What should startups use?
[01:05:30] Active learning – AI asking the right questions
[01:15:00] Final thoughts – Where AI reasoning is heading next

Resources & Links

SupernaturalChristian
Seek For Truth 2.3.25 | Ancient Destruction | Acts of Peter | Dying Testimonies

SupernaturalChristian

Feb 4, 2025


Seek For Truth 2.3.25 | Ancient Destruction | Acts of Peter | Dying Testimonies. Visit us at www.seekfortruth.org

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Happy holidays! We'll be sharing snippets from Latent Space LIVE! through the break, bringing you the best of 2024! We want to express our deepest appreciation to event sponsors AWS, Daylight Computer, Thoth.ai, StrongCompute, Notable Capital, and most of all, all our LS supporters who helped fund the gorgeous venue and A/V production!

For NeurIPS last year we did our standard conference podcast coverage interviewing selected papers (that we have now also done for ICLR and ICML), however we felt that we could be doing more to help AI Engineers 1) get more industry-relevant content, and 2) recap 2024 year in review from experts. As a result, we organized the first Latent Space LIVE!, our first in-person miniconference, at NeurIPS 2024 in Vancouver. Today, we're proud to share Loubna's highly anticipated talk (slides here)!

Synthetic Data
We called out the Synthetic Data debate at last year's NeurIPS, and no surprise that 2024 was dominated by the rise of synthetic data everywhere:
* Apple's Rephrasing the Web, Microsoft's Phi 2-4 and Orca/AgentInstruct, Tencent's Billion Persona dataset, DCLM, and HuggingFace's FineWeb-Edu, and Loubna's own Cosmopedia extended the ideas of synthetic textbook and agent generation to improve raw web scrape dataset quality
* This year we also talked to the IDEFICS/OBELICS team at HuggingFace who released WebSight this year, the first work on code-vs-images synthetic data.
* We called Llama 3.1 the Synthetic Data Model for its extensive use (and documentation!) of synthetic data in its pipeline, as well as its permissive license.
* Nemotron CC and Nemotron-4-340B also made a big splash this year for how they used 20k items of human data to synthesize over 98% of the data used for SFT/PFT.
* Cohere introduced Multilingual Arbitrage: Optimizing Data Pools to Accelerate Multilingual Progress, observing gains of up to 56.5% improvement in win rates comparing multiple teachers vs the single best teacher model
* In post training, AI2's Tülu 3 (discussed by Luca in our Open Models talk) and Loubna's Smol Talk were also notable open releases this year.

This comes in the face of a lot of scrutiny and criticism, with Scale AI as one of the leading voices publishing AI models collapse when trained on recursively generated data in Nature magazine, bringing mainstream concerns to the potential downsides of poor quality syndata. Part of the concerns we highlighted last year on low-background tokens are coming to bear: ChatGPT-contaminated data is spiking in every possible metric. But perhaps, if Sakana's AI Scientist pans out this year, we will have mostly-AI AI researchers publishing AI research anyway, so do we really care as long as the ideas can be verified to be correct?

Smol Models
Meta surprised many folks this year by not just aggressively updating Llama 3 and adding multimodality, but also adding a new series of “small” 1B and 3B “on device” models this year, even working on quantized numerics collaborations with Qualcomm, Mediatek, and Arm. It is near unbelievable that a 1B model today can qualitatively match a 13B model of last year, and the minimum size to hit a given MMLU bar has come down roughly 10x in the last year.
We have been tracking this proxied by Lmsys Elo and inference price. The key reads this year are:
* MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases
* Apple Intelligence Foundation Language Models
* Hymba: A Hybrid-head Architecture for Small Language Models
* Loubna's SmolLM and SmolLM2: a family of state-of-the-art small models with 135M, 360M, and 1.7B parameters on the pareto efficiency frontier.
* and Moondream, which we already covered in the 2024 in Vision talk

Full Talk on YouTube
please like and subscribe!

Timestamps
* [00:00:05] Loubna Intro
* [00:00:33] The Rise of Synthetic Data Everywhere
* [00:02:57] Model Collapse
* [00:05:14] Phi, FineWeb, Cosmopedia - Synthetic Textbooks
* [00:12:36] DCLM, Nemotron-CC
* [00:13:28] Post Training - AI2 Tulu, Smol Talk, Cohere Multilingual Arbitrage
* [00:16:17] Smol Models
* [00:18:24] On Device Models
* [00:22:45] Smol Vision Models
* [00:25:14] What's Next

Transcript: 2024 in Synthetic Data and Smol Models

[00:00:05] Loubna Intro[00:00:05] Speaker: I'm very happy to be here. Thank you for the invitation. So I'm going to be talking about synthetic data in 2024. And then I'm going to be talking about small on device models. So I think the most interesting thing about synthetic data this year is that like now we have it everywhere in the large language models pipeline.[00:00:33] The Rise of Synthetic Data Everywhere[00:00:33] Speaker: I think initially, synthetic data was mainly used just for post training, because naturally that's the part where we needed human annotators. And then after that, we realized that we don't really have good benchmarks to [00:01:00] measure if models follow instructions well, if they are creative enough, or if they are chatty enough, so we also started using LLMs as judges.[00:01:08] Speaker: Thank you. And I think this year and towards the end of last year, we also went to the pre training parts and we started generating synthetic data for pre training to kind of replace some parts of the web. And the motivation behind that is that you have a lot of control over synthetic data. You can control your prompt and basically also the kind of data that you generate.[00:01:28] Speaker: So instead of just trying to filter the web, you could try to get the LLM to generate what you think the best web pages could look like and then train your models on that. So this is how we went from not having synthetic data at all in the LLM pipeline to having it everywhere. And so the cool thing is like today you can train an LLM with like an entirely synthetic pipeline.[00:01:49] Speaker: For example, you can use our Cosmopedia datasets and you can train a 1B model on like 150 billion tokens that are 100 percent synthetic. And those are also of good quality. And then you can [00:02:00] instruction tune the model on a synthetic SFT dataset. You can also do DPO on a synthetic dataset. And then to evaluate if the model is good, you can use.[00:02:07] Speaker: A benchmark that uses LLMs as a judge, for example, MTBench or AlpacaEval. So I think this is like a really mind blowing because like just a few years ago, we wouldn't think this is possible. And I think there's a lot of concerns about model collapse, and I'm going to talk about that later. But we'll see that like, if we use synthetic data properly and we curate it carefully, that shouldn't happen.[00:02:29] Speaker: And the reason synthetic data is very popular right now is that we have really strong models, both open and closed.
It is really cheap and fast to use compared to human annotations, which cost a lot and take a lot of time. And also for open models right now, we have some really good inference frameworks.[00:02:47] Speaker: So if you have enough GPUs, it's really easy to spawn these GPUs and generate like a lot of synthetic data. Some examples are VLM, TGI, and TensorRT.[00:02:57] Model Collapse[00:02:57] Speaker: Now let's talk about the elephant in the room, model [00:03:00] collapse. Is this the end? If you look at the media and all of like, for example, some papers in nature, it's really scary because there's a lot of synthetic data out there in the web.[00:03:09] Speaker: And naturally we train on the web. So we're going to be training a lot of synthetic data. And if model collapse is going to happen, we should really try to take that seriously. And the other issue is that, as I said, we think, a lot of people think the web is polluted because there's a lot of synthetic data.[00:03:24] Speaker: And for example, when we're building fine web datasets here at Guillerm and Hinek, we're interested in like, how much synthetic data is there in the web? So there isn't really a method to properly measure the amount of synthetic data or to save a webpage synthetic or not. But one thing we can do is to try to look for like proxy words, for example, expressions like as a large language model or words like delve that we know are actually generated by chat GPT.[00:03:49] Speaker: We could try to measure the amount of these words in our data system and compare them to the previous years. For example, here, we measured like a, these words ratio in different dumps of common crawl. [00:04:00] And we can see that like the ratio really increased after chat GPT's release. So if we were to say that synthetic data amount didn't change, you would expect this ratio to stay constant, which is not the case.[00:04:11] Speaker: So there's a lot of synthetic data probably on the web, but does this really make models worse? So what we did is we trained different models on these different dumps. And we then computed their performance on popular, like, NLP benchmarks, and then we computed the aggregated score. And surprisingly, you can see that the latest DOMs are actually even better than the DOMs that are before.[00:04:31] Speaker: So if there's some synthetic data there, at least it did not make the model's worse. Yeah, which is really encouraging. So personally, I wouldn't say the web is positive with Synthetic Data. Maybe it's even making it more rich. And the issue with like model collapse is that, for example, those studies, they were done at like a small scale, and you would ask the model to complete, for example, a Wikipedia paragraph, and then you would train it on these new generations, and you would do that every day.[00:04:56] Speaker: iteratively. I think if you do that approach, it's normal to [00:05:00] observe this kind of behavior because the quality is going to be worse because the model is already small. And then if you train it just on its generations, you shouldn't expect it to become better. But what we're really doing here is that we take a model that is very large and we try to distill its knowledge into a model that is smaller.[00:05:14] Phi, FineWeb, Cosmopedia - Synthetic Textbooks[00:05:14] Speaker: And in this way, you can expect to get like a better performance for your small model. And using synthetic data for pre-training has become really popular. 
After the textbooks are all you need papers where Microsoft basically trained a series of small models on textbooks that were using a large LLM.[00:05:32] Speaker: And then they found that these models were actually better than models that are much larger. So this was really interesting. It was like first of its time, but it was also met with a lot of skepticism, which is a good thing in research. It pushes you to question things because the dataset that they trained on was not public, so people were not really sure if these models are really good or maybe there's just some data contamination.[00:05:55] Speaker: So it was really hard to check if you just have the weights of the models. [00:06:00] And as Hugging Face, because we like open source, we tried to reproduce what they did. So this is our Cosmopedia dataset. We basically tried to follow a similar approach to what they documented in the paper. And we created a synthetic dataset of textbooks and blog posts and stories that had almost 30 billion tokens.[00:06:16] Speaker: And we tried to train some models on that. And we found that like the key ingredient to getting a good data set that is synthetic is trying as much as possible to keep it diverse. Because if you just throw the same prompts as your model, like generate like a textbook about linear algebra, and even if you change the temperature, the textbooks are going to look alike.[00:06:35] Speaker: So there's no way you could scale to like millions of samples. And the way you do that is by creating prompts that have some seeds that make them diverse. In our case, the prompt, we would ask the model to generate a textbook, but make it related to an extract from a webpage. And also we try to frame it within, to stay within topic.[00:06:55] Speaker: For example, here, we put like an extract about cardiovascular bioimaging, [00:07:00] and then we ask the model to generate a textbook related to medicine that is also related to this webpage. And this is a really nice approach because there's so many webpages out there. So you can. Be sure that your generation is not going to be diverse when you change the seed example.[00:07:16] Speaker: One thing that's challenging with this is that you want the seed samples to be related to your topics. So we use like a search tool to try to go all of fine web datasets. And then we also do a lot of experiments with the type of generations we want the model to generate. For example, we ask it for textbooks for middle school students or textbook for college.[00:07:40] Speaker: And we found that like some generation styles help on some specific benchmarks, while others help on other benchmarks. For example, college textbooks are really good for MMLU, while middle school textbooks are good for benchmarks like OpenBookQA and Pico. This is like a sample from like our search tool.[00:07:56] Speaker: For example, you have a top category, which is a topic, and then you have some [00:08:00] subtopics, and then you have the topic hits, which are basically the web pages in fine web does belong to these topics. And here you can see the comparison between Cosmopedia. We had two versions V1 and V2 in blue and red, and you can see the comparison to fine web, and as you can see throughout the training training on Cosmopedia was consistently better.[00:08:20] Speaker: So we managed to get a data set that was actually good to train these models on. 
It's of course much smaller than FineWeb, only 30 billion tokens, but that's the scale of Microsoft's dataset, so we kind of managed to reproduce a bit of what they did. And the dataset is public, so everyone can go there and check that everything is all right.[00:08:38] Speaker: Now this is a recent paper from NVIDIA, Nemotron-CC. They took things a bit further and generated not a few billion tokens but 1.9 trillion tokens, which is huge. We'll see later how they did that; it's more about rephrasing the web. So today there are some really huge synthetic datasets out there, and they're public, so [00:09:00] you can try to filter them even further if you want to get higher-quality corpora.[00:09:04] Speaker: This rephrasing-the-web approach was suggested in a paper by Pratyush Maini, where basically they take some samples from the C4 dataset and then use an LLM to rewrite these samples into a better format. For example, they ask an LLM to rewrite the sample into a Wikipedia passage or into a Q&A page.[00:09:25] Speaker: The interesting thing in this approach is that you can use a model that is small, because rewriting doesn't require knowledge; it's just rewriting a page into a different style. So the model doesn't need extensive knowledge of what it's rewriting, compared to asking a model to generate a new textbook without giving it any ground truth.[00:09:45] Speaker: So they rewrite some samples from C4 into Q&A and into Wikipedia style, and they find that doing this works better than training just on C4. And what they did in Nemotron-CC is a similar approach. [00:10:00] They rewrite some pages from Common Crawl for two reasons. One is to improve pages that are low quality, so they rewrite them into, for example, a Wikipedia-style page so they look better.[00:10:11] Speaker: The other reason is to create more diverse datasets. So they take a dataset that they already heavily filtered, take these pages that are already high quality, and ask the model to rewrite them in question-and-answer format, into open-ended questions or multiple-choice questions.[00:10:27] Speaker: This way they can reuse the same page multiple times without worrying about duplicates, because it's the same information, but it's going to be written differently. So I think that's also a really interesting approach for generating synthetic data just by rephrasing the pages that you already have.[00:10:44] Speaker: There's also an approach called ProX where they start from a web page and then generate a program which specifies how to rewrite that page to make it better and less noisy. For example, here you can see that there's some leftover metadata in the web page, and you don't necessarily want to keep that for training [00:11:00] your model.[00:11:00] Speaker: So they train a model that can generate programs that normalize pages and remove extra lines. I think this approach is also interesting, but it's maybe less scalable than the approaches I presented before. So that was it for rephrasing and generating new textbooks.[00:11:17] Speaker: Another approach that I think is really good and becoming really popular for using synthetic data for pre-training is building better classifiers for filtering the web. For example, here we released a dataset called FineWeb-Edu.
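Here is a sketch of the web-rephrasing recipe just described (in the spirit of WRAP and Nemotron-CC), assuming an OpenAI-compatible endpoint such as the one vLLM or TGI can serve locally; the base URL, model name, and prompts are illustrative, not the papers' exact setup.

```python
from openai import OpenAI

# Assumes a local OpenAI-compatible server (e.g. `vllm serve <model>`); adjust as needed.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

REWRITE_STYLES = {
    "wikipedia": "Rewrite the following web page as a clear, factual encyclopedia passage.",
    "qa": "Rewrite the following web page as a list of questions with detailed answers.",
}

def rephrase(page_text: str, style: str, model: str = "my-small-rewriter") -> str:
    """Ask the model to rewrite one page; the knowledge stays the same, only the form changes."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": REWRITE_STYLES[style]},
            {"role": "user", "content": page_text},
        ],
        temperature=0.7,
    )
    return response.choices[0].message.content

noisy_page = "home | login | Cardio imaging 101: the heart pumps blood ... (c) 2013 footer"
print(rephrase(noisy_page, "wikipedia"))
```

Because the source page already carries the facts, even a small rewriter model can do this job, which is what makes the approach cheap enough to apply to trillions of tokens.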
The way we built FineWeb-Edu is by taking Llama 3 and asking it to rate the educational content of web pages from zero to five.[00:11:39] Speaker: So for example, if a page is a really good textbook that could be useful in a school setting, it would get a really high score, and if a page is just an advertisement or promotional material, it would get a lower score. After that, we take these synthetic annotations and train a classifier on them, a BERT-like model. [00:12:00] Then we run this classifier on all of FineWeb, which is a 15 trillion token dataset, and we only keep the pages that have a score higher than 3. In our case, we went from 15 trillion tokens to just 1.5 trillion tokens that are really highly educational.[00:12:16] Speaker: And as you can see here, FineWeb-Edu outperforms all the other public web datasets by a large margin on a couple of benchmarks; here I show the aggregated score, and you can see that this approach is really effective for filtering web datasets to get better corpora for training your LLMs.[00:12:36] DCLM, Nemotron-CC[00:12:36] Speaker: Others have also tried this approach. There's, for example, the DCLM dataset, where they also trained a classifier, but not to detect educational content. Instead, they trained it on the OpenHermes dataset, which is a dataset for instruction tuning, and also on ELI5 subreddit data, and they also get a really high-quality dataset which is very information-dense and can help [00:13:00] you train some really good LLMs.[00:13:01] Speaker: For Nemotron-CC they also did this, but instead of using one classifier they used an ensemble of classifiers. They used, for example, the DCLM classifier and also classifiers like the one we used for FineWeb-Edu, and then they combined the scores with an ensemble method to only retain the best high-quality pages, and they get a dataset that works even better than the ones we developed.[00:13:25] Speaker: So that was it for synthetic data for pre-training.[00:13:28] Post Training - AI2 Tulu, Smol Talk, Cohere Multilingual Arbitrage[00:13:28] Speaker: Now we can go back to post-training. I think there are a lot of interesting post-training datasets out there. One that was released recently is AgentInstruct by Microsoft, where they basically try to target some specific skills and improve the performance of models on them.[00:13:43] Speaker: For example, here you can see code, brain teasers, open-domain QA, and they managed to get a dataset such that when fine-tuning Mistral 7B on it, it outperforms the original instruct model that was released by Mistral. And as I said, to get good synthetic data, you really [00:14:00] have to have a framework to make sure that your data is diverse.[00:14:03] Speaker: For example, they always seed the generations on either source code or raw text documents, and then they rewrite them to make sure they're easier to generate instructions from, and then they use that for their instruction data generation. There's also the Tulu 3 SFT mixture, which was released recently by Allen AI.[00:14:23] Speaker: It's also really good quality and it covers a wide range of tasks. And the way they make sure that this dataset is diverse is by using personas from the PersonaHub dataset, which is basically a dataset of, I think, over a million personas.
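A compressed sketch of the FineWeb-Edu-style classifier pipeline described above: an LLM scores a seed set of pages for educational value from 0 to 5, a cheap classifier is fit on those synthetic labels, and only pages predicted to be educational are kept. The TF-IDF plus logistic regression model below is a stand-in for the BERT-style classifier actually used, and the pages and scores are toy examples.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Step 1 (normally done by an LLM such as Llama 3): educational scores 0-5 for a seed set.
seed_pages = [
    "Introduction to photosynthesis: plants convert light into chemical energy ...",
    "BUY NOW!!! Limited offer on sneakers, click here ...",
    "A worked example of solving quadratic equations step by step ...",
    "Celebrity gossip roundup of the week ...",
]
llm_scores = [5, 0, 4, 1]  # synthetic annotations

# Step 2: train a cheap classifier on the synthetic labels (stand-in for a BERT head).
keep_label = [int(score >= 3) for score in llm_scores]
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(seed_pages, keep_label)

# Step 3: run the classifier over the full corpus and keep only the "educational" pages.
corpus = ["Lecture notes on linear algebra ...", "Flash sale, 90% off today only ..."]
kept = [page for page, label in zip(corpus, clf.predict(corpus)) if label == 1]
print(kept)
```

The expensive LLM only labels a small seed set; the cheap classifier is what actually scans all 15 trillion tokens, which is why this scales.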
For example, in the Tulu mixture, to generate a new code snippet they would give the model a persona, say a machine learning researcher interested in neural networks, and then ask it to generate a coding problem.[00:14:49] Speaker: This way you make sure that your dataset is really diverse, and then you can filter the dataset further, for example using reward models. We also released a dataset called SmolTalk, [00:15:00] and we also tried to cover a wide range of tasks. As you can see here, when fine-tuning Mistral 7B on this dataset, we also outperformed the original Mistral instruct on a number of benchmarks, notably on mathematics and on instruction following with IFEval.[00:15:18] Speaker: Another paper that's really interesting that I wanted to mention is the one on multilingual data arbitrage by Cohere. Basically they want to generate a dataset for post-training that is multilingual, and they have a really interesting problem: there isn't one model that's really good at all the languages they wanted.[00:15:36] Speaker: So what they do is use not just one teacher model but multiple teachers. They have a router which sends the prompts they have to all these models, then they get the completions, and they have a reward model that scores all these generations and only keeps the best one.[00:15:52] Speaker: And this is like arbitrage in finance. I think what's interesting is that it shows that synthetic data doesn't have to come from a single model. [00:16:00] Because we have so many good models now, you can pool these models together and get a dataset that's really high quality, that's diverse, and that covers all your needs.[00:16:12] Speaker: I was supposed to put a meme there, but... yeah, so that was it for synthetic data.[00:16:17] Smol Models[00:16:17] Speaker: Now we can look at what's happening in the small models field in 2024. I don't know if you know, but now we have some really good small models. For example, Llama 3.2 1B matches Llama 2 13B, which was released last year, on the LMSYS Arena, which is basically the default go-to leaderboard for evaluating models using human evaluation.[00:16:39] Speaker: And as you can see here, the scores of the models are really close. So I think we've made a huge leap forward in terms of small models. Of course, that's just one data point, but there's more. For example, if you look at this chart from the Qwen 2.5 blog post, it shows that today we have some really good models that are only 3 billion [00:17:00] or 4 billion parameters and score really high on MMLU,[00:17:03] Speaker: which is a really popular benchmark for evaluating models. You can see here that the blue dots have more than 65 on MMLU and the grey ones have less; for example, Llama 33B had less. So now we have a 3B model that outperforms a 33B model that was released earlier. So I think people are starting to realize that we shouldn't just scale and scale models, but should try to make them more efficient.[00:17:33] Speaker: I don't know if you knew, but you can also chat with a 3B-plus model on your iPhone. For example, here, this is an app called PocketPal, where you can go and select a model from Hugging Face; it has a large choice. For example, here we loaded Phi-3.5, which is 3.8 billion parameters, on this iPhone.
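A toy sketch of the multi-teacher arbitrage setup described a moment ago: route each prompt to several teacher models, score every completion with a reward model, and keep only the best one. The teacher functions and the length-based reward below are placeholders; in the Cohere paper both the per-language teachers and the reward model are real models.

```python
from typing import Callable, Dict, List, Tuple

def best_completion(
    prompt: str,
    teachers: Dict[str, Callable[[str], str]],
    reward: Callable[[str, str], float],
) -> str:
    """Query every teacher, score each completion, and return the highest-scoring one."""
    completions = {name: gen(prompt) for name, gen in teachers.items()}
    scored: List[Tuple[float, str, str]] = [
        (reward(prompt, text), name, text) for name, text in completions.items()
    ]
    scored.sort(reverse=True)
    best_score, best_name, best_text = scored[0]
    print(f"kept completion from {best_name} (score={best_score:.2f})")
    return best_text

# Placeholder teachers and reward model, for illustration only.
teachers = {
    "teacher_fr": lambda p: f"[réponse en français à : {p}]",
    "teacher_multi": lambda p: f"[generic multilingual answer to: {p}]",
}
reward = lambda prompt, text: len(text) / 100.0  # stand-in for a learned reward model

dataset = [best_completion(p, teachers, reward) for p in ["Explique la photosynthèse."]]
print(dataset)
```

The routing step is what makes this arbitrage: for each language or task you keep whichever teacher happens to be strongest, instead of committing to a single model.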
We can chat with this model, and you can see that even the latency is acceptable.[00:17:57] Speaker: For example, here I asked it to give me a joke about [00:18:00] NeurIPS. So let's see what it has to say.[00:18:06] Speaker: Okay: why did the neural network attend NeurIPS? Because it heard there would be a lot of layers and fun, and it wanted to train its sense of humor. So not very funny, but at least it can run on device. So I think now we have good small models, but we also have good frameworks and tools to use these small models.[00:18:24] On Device Models[00:18:24] Speaker: So I think we're really close to having really good on-edge and on-device models. For a while we've had this narrative that just training larger models is better. Of course, this is supported by the scaling laws: as you can see here, when we scale the model size, the loss is lower, and obviously you get a better model.[00:18:46] Speaker: We can see this, for example, in the GPT family of models, how we went from just a hundred million parameters to more than a trillion parameters, and of course we all observed the performance improvement when using the latest models. But [00:19:00] one thing we shouldn't forget is that when we scale the model, we also scale the inference cost and time,[00:19:05] Speaker: so the largest models are going to cost so much more. So I think now, instead of just building larger models, we should be focusing on building more efficient models. It's no longer a race for the largest models, since these models are really expensive to run, they require really good infrastructure, and they cannot run on, for example, consumer hardware.[00:19:27] Speaker: When you try to build more efficient models that match larger models, that's when you can really unlock some interesting on-device use cases. And a trend we're noticing now is training smaller models for longer. For example, if you compare how long Llama 1 was trained compared to Llama 3, there is a huge increase in the pre-training length.[00:19:50] Speaker: Llama 1 was trained on 1 trillion tokens, but Llama 3 8B was trained on 15 trillion tokens. So Meta managed to get a model that's the same size but [00:20:00] performs so much better, by choosing to spend more during training, because as we know, training is a one-time cost but inference is ongoing.[00:20:08] Speaker: If we want to see what the small model trends were in 2024, I think this MobileLLM paper by Meta is interesting. They study models that have less than 1 billion parameters and try to find which architecture makes the most sense for them. For example, they find that depth is more important than width:[00:20:29] Speaker: it's more important to have models with more layers than to make them wider. They also find that GQA helps and that tying the embeddings helps. So I think it's a nice study overall for models that are just a few hundred million parameters. There's also the Apple Intelligence tech report, which is interesting.[00:20:48] Speaker: For Apple Intelligence, they had two models, one on the server and another on device, which had 3 billion parameters. And I think the interesting part is that they trained this model using [00:21:00] pruning and then distillation.
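Since pruning plus distillation comes up here, below is a minimal PyTorch sketch of the usual distillation objective, assumed as a generic recipe rather than Apple's actual training code: the small student is trained to match the teacher's temperature-softened token distribution alongside the normal cross-entropy loss.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend cross-entropy on the labels with a KL term toward the teacher's
    softened distribution. Logits: (batch, seq, vocab); labels: (batch, seq)."""
    ce = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)), labels.view(-1)
    )
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * ce + (1 - alpha) * kl

# Toy tensors standing in for teacher/student forward passes.
batch, seq, vocab = 2, 4, 10
student = torch.randn(batch, seq, vocab, requires_grad=True)
teacher = torch.randn(batch, seq, vocab)
labels = torch.randint(0, vocab, (batch, seq))

loss = distillation_loss(student, teacher, labels)
loss.backward()
print(loss.item())
```

The temperature T softens both distributions so the student also learns from the teacher's relative preferences among wrong tokens, not just the argmax.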
For example, they have a table where they show that using pruning and distillation works much better than training from scratch.[00:21:08] Speaker: They also have some interesting insights about how they specialize their models on specific tasks, for example summarization and rewriting. There's also a paper by NVIDIA that was released recently; I think you've already had a talk about hybrid models, which was all interesting.[00:21:23] Speaker: For this model, they used a hybrid architecture between state space models and transformers, and they managed to train a 1B model that's really performant without needing to train it on a lot of tokens. Regarding our work, we just recently released SmolLM2, a series of three models which are the best in class at each model size.[00:21:46] Speaker: For example, our 1.7B model outperforms Llama 3.2 1B and also Qwen 2.5 1.5B. The way we managed to train this model is the following: we spent a lot of time trying to curate the pre-training datasets. We did a lot of [00:22:00] ablations, trying to find which datasets are good and also how to mix them. We also created some new math and code datasets that we're releasing soon.[00:22:08] Speaker: So we really spent a lot of time trying to find the best mixture to train these models on, and we also trained these models for very long. For example, SmolLM1 was trained on only 1 trillion tokens, but this model is trained on 11 trillion tokens.[00:22:24] Speaker: And we saw that the performance kept improving; the models didn't really plateau mid-training, which I think is really interesting. It shows that you can train such small models for very long and keep getting performance gains. What's interesting about SmolLM2 is that it's fully open: we also released the pre-training code base, the fine-tuning code, the datasets, and also the evaluation in this repository.[00:22:45] Smol Vision Models[00:22:45] Speaker: There are also really interesting small models not just for text but for vision. For example, here you can see SmolVLM, which is a 2B model that's really efficient: it doesn't consume a lot of RAM and it also has good performance. There's also Moondream 0.5B, which was released recently; it's the smallest visual language model,[00:23:04] Speaker: and as you can see, there isn't a big trade-off compared to Moondream 2B. So now I've shown you that we have some really good small models and we also have the tools to use them, but why should you consider using small models, and when? I think small models are really interesting because of the on-device angle.[00:23:23] Speaker: Because these models are small and they can run fast, you can basically run them on your laptop but also on your mobile phone. And this means that your data stays local: you don't have to send your queries to third parties, and this really enhances privacy. That was, for example, one of the big selling points for Apple Intelligence.[00:23:42] Speaker: Also, right now we have a lot of frameworks for on-device inference. For example, there's MLX, MLC, llama.cpp, Transformers.js. So we have a lot of options, and each of them has great features.
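As a concrete example of how easy running such a model locally has become, here is a sketch using the Hugging Face transformers pipeline (recent versions accept chat-style messages); the model id is just an example of a small instruct checkpoint, and on-device apps would typically go through MLX, llama.cpp, or Transformers.js instead.

```python
from transformers import pipeline

# Any small instruction-tuned checkpoint works here; this id is an example.
generator = pipeline(
    "text-generation",
    model="HuggingFaceTB/SmolLM2-1.7B-Instruct",
    device_map="auto",
)

messages = [{"role": "user", "content": "Give me a one-line joke about NeurIPS."}]
out = generator(messages, max_new_tokens=64)
print(out[0]["generated_text"])
```

On a laptop this runs comfortably on CPU or a small GPU, which is exactly the point: no third-party API call, so the prompt never leaves the machine.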
Small models are also really powerful if you choose to specialize them.[00:24:00] Speaker: For example, here there's a startup called NuMind which took SmolLM and fine-tuned it on text extraction datasets, and they managed to get a model that's not very far from models that are much larger. So I think text extraction is one use case where small models can be really performant and it makes sense to use them instead of larger models.[00:24:19] Speaker: You can also chat with these models in the browser. For example, here, you can go there, load the model, even turn off your internet, and just start chatting with the model locally. Speaking of text extraction, if you don't want to fine-tune the models, there's a really good method called structured generation,[00:24:36] Speaker: where you can basically force the models to follow a JSON schema that you define. For example, here we try to force the model to follow a schema for extracting key information from GitHub issues. You can input free text, which is a complaint about a GitHub repository, something not working, and then you can run it and the model extracts everything that is relevant for creating your GitHub issue:[00:24:58] Speaker: for example the [00:25:00] priority, here the priority is high, the type of the issue, a bug, then a title and an estimate of how long it will take to fix. And you can do this in the browser and transform your text into a properly formatted GitHub issue.[00:25:14] What's Next[00:25:14] Speaker: So what's next for synthetic data and small models?[00:25:18] Speaker: I think domain-specific synthetic data is already important and it's going to become even more important, for example generating synthetic data for math. I think this would really help improve the reasoning of a lot of models, and a lot of people are doing it, for example Qwen 2.5 Math, and everyone's trying to reproduce o1.[00:25:37] Speaker: So for synthetic data, specializing it on some domains is going to be really important. And for small models, I think specializing them through fine-tuning is also going to be really important, because a lot of companies are just using the large models because they are better.[00:25:53] Speaker: But on some tasks you can already get decent performance with small models, so you don't need to pay a [00:26:00] much larger cost just to make your model better at your task by a few percent. And this is not just for text; I think it also applies to other modalities like vision and audio.[00:26:11] Speaker: I think you should also watch out for on-device frameworks and applications. For example, the app I showed, or Ollama; all these frameworks are becoming really popular, I'm pretty sure we're going to get more of them in 2025, and users really like that. I should also give a hot take.[00:26:28] Speaker: I think that in AI we started with fine-tuning, for example trying to make BERT work on some specific use cases and really struggling to do that. Then we got models that are much larger, so we switched to prompt engineering to get the most out of these models. And I think we're going back to fine-tuning, where we realize these models are really costly[00:26:47] Speaker: and it's better to use just a small model or try to specialize it.
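A minimal sketch of what specializing a small model usually looks like in practice: parameter-efficient fine-tuning with LoRA adapters via the peft library. The checkpoint id, target module names, and adapter hyperparameters are assumptions for illustration; the point is that only a tiny fraction of the weights needs to be trained and shipped.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

# Example checkpoint; any small causal LM works. Adapter hyperparameters are illustrative.
base_id = "HuggingFaceTB/SmolLM2-360M-Instruct"
model = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

lora = LoraConfig(
    r=8,                                   # low-rank adapter dimension
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],   # attention projections; names vary per architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of the base model

# From here, train on your task-specific dataset with your usual trainer
# (e.g. the transformers Trainer) and ship only the small adapter weights.
```

A few million trainable adapter parameters on a sub-1B model is often enough to close most of the gap to a much larger general model on a narrow task such as text extraction.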
So I think it's a bit of a cycle, and we're going to see more fine-tuning and less pure prompt engineering of the models. So that was my talk. Thank you for following, and if you have [00:27:00] any questions, we can take them now. Get full access to Latent Space at www.latent.space/subscribe

Stay Forever
Deus Ex: Wusstet ihr eigentlich ...?

Stay Forever

Play Episode Listen Later Dec 21, 2024 92:07


(Christmas week, day 4) "Wusstet ihr eigentlich …?" ("Did you actually know …?") is our name for bonus episodes accompanying our big podcasts. They collect all the facts, anecdotes, audio clips, and other material that didn't fit into the main episode, was forgotten, or occurred to us too late. The format is one of our most popular and most frequent supporter formats: this year alone there have been 16 editions (including this one), 6 of them tied to SF episodes, 7 to SSF, and 3 to SFT. This time it's about Deus Ex, and it's not a re-release from the Steady or Patreon feed but a brand-new episode! In it we discuss… … the secret of the missing Empire State Building … why the ladders in the game work so badly … visiting the Matrix and the Ion Storm offices via Deus Ex … curious bugs and two NPCs searching for somewhere to sit. And as a bonus, conspiracy expert Christian Beuster explains the truth behind two of the game's conspiracy myths.

The Lunar Society
Gwern Branwen - How an Anonymous Researcher Predicted AI's Trajectory

The Lunar Society

Play Episode Listen Later Nov 13, 2024 96:43


Gwern is a pseudonymous researcher and writer. He was one of the first people to see LLM scaling coming. If you've read his blog, you know he's one of the most interesting polymathic thinkers alive. In order to protect Gwern's anonymity, I proposed interviewing him in person, and having my friend Chris Painter voice over his words after. This amused him enough that he agreed. After the episode, I convinced Gwern to create a donation page where people can help sustain what he's up to. Please go here to contribute. Read the full transcript here.
Sponsors:
* Jane Street is looking to hire their next generation of leaders. Their deep learning team is looking for ML researchers, FPGA programmers, and CUDA programmers. Summer internships are open - if you want to stand out, take a crack at their new Kaggle competition. To learn more, go here: https://jane-st.co/dwarkesh
* Turing provides complete post-training services for leading AI labs like OpenAI, Anthropic, Meta, and Gemini. They specialize in model evaluation, SFT, RLHF, and DPO to enhance models' reasoning, coding, and multimodal capabilities. Learn more at turing.com/dwarkesh.
* This episode is brought to you by Stripe, financial infrastructure for the internet. Millions of companies from Anthropic to Amazon use Stripe to accept payments, automate financial processes and grow their revenue.
If you're interested in advertising on the podcast, check out this page.
Timestamps
00:00:00 - Anonymity
00:01:09 - Automating Steve Jobs
00:04:38 - Isaac Newton's theory of progress
00:06:36 - Grand theory of intelligence
00:10:39 - Seeing scaling early
00:21:04 - AGI Timelines
00:22:54 - What to do in remaining 3 years until AGI
00:26:29 - Influencing the shoggoth with writing
00:30:50 - Human vs artificial intelligence
00:33:52 - Rabbit holes
00:38:48 - Hearing impairment
00:43:00 - Wikipedia editing
00:47:43 - Gwern.net
00:50:20 - Counterfactual careers
00:54:30 - Borges & literature
01:01:32 - Gwern's intelligence and process
01:11:03 - A day in the life of Gwern
01:19:16 - Gwern's finances
01:25:05 - The diversity of AI minds
01:27:24 - GLP drugs and obesity
01:31:08 - Drug experimentation
01:33:40 - Parasocial relationships
01:35:23 - Open rabbit holes
Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe

GBF - Gay Buddhist Forum
Somatic Meditation: The Anatomy of Practice - David Moreno

GBF - Gay Buddhist Forum

Play Episode Listen Later Oct 27, 2024 69:50


In this welcome departure from our usual dharma talks, David Moreno guides us in weaving sitting practice with the Tantric practice of Yoga Nidra and the energetic practice of Qi Gong. These processes augment and integrate meditation into moving mindfulness. Yet, they are complete meditations in themselves. Throughout this session, he encourages us to allow the movements to help us “feel more, think less.”WATCH this interactive talk and find the quotes that David shares on our website: https://gaybuddhist.org/podcast/somatic-meditation-the-anatomy-of-practice-david-moreno/______________David Moreno, RYT 500, YACEP, SFT, has taught at international yoga conferences, festivals, universities, and teacher trainings worldwide. He continues his study in Tantra, Ayurveda, meditation and Qi Gong. He is known for his depth, keen sense of humor and timing, making his teaching both playful and informative. His yoga commentaries have been published in Yoga Journal, Yoga International, LA Yoga Magazine, and Common Ground. His dance criticism and performing arts journalism are featured in Culture Vulture. David is also an ordained minister. He also teaches day-long and weekend-long mindfulness movement and sitting retreats, called Deliberate Stillness, at Green Gulch Zen Center. Learn more at https://moryoga.com/retreats/deliberate-stillness-daylong-2024/ ______________ To support our efforts to share these talks with LGBTQIA audiences worldwide, please visit https://gaybuddhist.org/There you can: Donate Learn how to participate live Find our schedule of upcoming speakers Join our mailing list or discussion forum Enjoy many hundreds of these recorded talks dating back to 1996 CREDITSAudio Engineer: George HubbardProducer: Tom BrueinMusic/Logo/Artwork: Derek Lassiter

Motley Fool Money
Stocks as FOMO Insurance

Motley Fool Money

Play Episode Listen Later Oct 19, 2024 32:08


Nobody wants to miss out on the next big thing. But “the next big thing” may, in fact, be nothing more than a dud. How can investors find the happy medium between FOMO and foresight? Senior Fool Analyst Asit Sharma joins Ricky Mulvey for a conversation on the different reasons why investors buy stocks. They also discuss: What we can learn from King Charles' portfolio. The math of winners vs. losers. How to think about expected value. Tickers mentioned: SFT, PLTR, BTC, CRSP, RKLB, USHY Host: Ricky Mulvey Guest: Asit Sharma Producer: Mary Long Engineer: Tim Sparks, Austin Morgan Learn more about your ad choices. Visit megaphone.fm/adchoices

Papers Read on AI
LongWriter: Unleashing 10,000+ Word Generation from Long Context LLMs

Papers Read on AI

Play Episode Listen Later Aug 21, 2024 38:53


Current long context large language models (LLMs) can process inputs up to 100,000 tokens, yet struggle to generate outputs exceeding even a modest length of 2,000 words. Through controlled experiments, we find that the model's effective generation length is inherently bounded by the samples it has seen during supervised fine-tuning (SFT). In other words, their output limitation is due to the scarcity of long-output examples in existing SFT datasets. To address this, we introduce AgentWrite, an agent-based pipeline that decomposes ultra-long generation tasks into subtasks, enabling off-the-shelf LLMs to generate coherent outputs exceeding 20,000 words. Leveraging AgentWrite, we construct LongWriter-6k, a dataset containing 6,000 SFT examples with output lengths ranging from 2k to 32k words. By incorporating this dataset into model training, we successfully scale the output length of existing models to over 10,000 words while maintaining output quality. We also develop LongBench-Write, a comprehensive benchmark for evaluating ultra-long generation capabilities. Our 9B parameter model, further improved through DPO, achieves state-of-the-art performance on this benchmark, surpassing even much larger proprietary models. In general, our work demonstrates that existing long context LLMs already possess the potential for a larger output window: all you need is data with extended output during model alignment to unlock this capability. Our code & models are at: https://github.com/THUDM/LongWriter. 2024: Yushi Bai, Jiajie Zhang, Xin Lv, Linzhi Zheng, Siqi Zhu, Lei Hou, Yuxiao Dong, Jie Tang, Juanzi Li https://arxiv.org/pdf/2408.07055
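A schematic sketch of the AgentWrite idea from the abstract: first ask the model for a section-by-section plan with target lengths, then generate each section separately and stitch the results together. The call_llm function is a placeholder for whatever chat-completion client you use; none of this is code from the paper.

```python
from typing import List

def call_llm(prompt: str) -> str:
    """Placeholder for an actual chat-completion call (OpenAI-compatible, vLLM, etc.)."""
    raise NotImplementedError

def plan_sections(task: str, n_sections: int = 10) -> List[str]:
    """Ask the model to break the task into section outlines with target word counts."""
    plan = call_llm(
        f"Break the writing task below into {n_sections} section outlines, "
        f"one per line, each with a target word count.\nTask: {task}"
    )
    return [line for line in plan.splitlines() if line.strip()]

def write_long(task: str) -> str:
    """Generate each planned section separately, conditioning on what is already written."""
    sections, written = plan_sections(task), []
    for outline in sections:
        # Each call only has to produce a few hundred words, which current models handle well.
        written.append(call_llm(
            f"Overall task: {task}\n"
            f"End of what is already written: {' '.join(written)[-2000:]}\n"
            f"Now write this section in full: {outline}"
        ))
    return "\n\n".join(written)

# write_long("A 10,000-word history of container shipping")  # needs a real call_llm
```

Each subtask stays within the output length the base model is comfortable with, which is how the pipeline reaches 20,000+ words overall.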

Rashad in Conversation
Breaking down Barriers in Scotland with Colin Campbell

Rashad in Conversation

Play Episode Listen Later Aug 19, 2024 23:49


Colin Campbell is an Associate Director in the Improving Project Delivery team with the Scottish Futures Trust (SFT). The Scottish Government established SFT in 2008 to act as an independent centre of expertise to deliver improvements in the public sector's planning, innovation, delivery and management of infrastructure. Colin's work within SFT has been focused on improving construction quality since 2018. He is a chartered civil engineer, a member of the Association for Project Management and has over 44 years experience in the construction industry. Prior to joining SFT he was involved in the delivery of projects across the UK and in Singapore, Eastern Europe and the Middle East. He is currently co-chair of the Construction Quality Improvement Collaborative (CQIC), a sector wide campaign for improving construction quality, with a reach across Scotland. 

Meet the Farmers
The Sustainable Food and Farming Pioneer - Patrick Holden

Meet the Farmers

Play Episode Listen Later Aug 12, 2024 58:37


Image credit: Sustainable Food Trust
Meet the Farmers is produced by RuralPod Media, the only specialist rural podcast production agency. Please note that this podcast does not constitute advice. Our podcast disclaimer can be found here.
About Ben and RuralPod Media
Ben Eagle is the founder and Head of Podcasts at RuralPod Media, a specialist rural podcast production agency. He is also a freelance rural affairs and agricultural journalist. You can find out more at ruralpodmedia.co.uk or benjamineagle.co.uk. If you have a business interested in getting involved with podcasting, check us out at RuralPod Media. We'd love to help you spread your message. Please subscribe to the show and leave us a review wherever you are listening.
Follow us on social media: Instagram @mtf_podcast, Twitter @mtf_podcast. Watch us on Youtube here.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

If you see this in time, join our emergency LLM paper club on the Llama 3 paper! For everyone else, join our special AI in Action club on the Latent Space Discord for a special feature with the Cursor cofounders on Composer, their newest coding agent!

Today, Meta is officially releasing the largest and most capable open model to date, Llama3-405B, a dense transformer trained on 15T tokens that beats GPT-4 on all major benchmarks. The 8B and 70B models from the April Llama 3 release have also received serious spec bumps, warranting the new label of Llama 3.1. If you are curious about the infra / hardware side, go check out our episode with Soumith Chintala, one of the AI infra leads at Meta. Today we have Thomas Scialom, who led Llama2 and now Llama3 post-training, so we spent most of our time on pre-training (synthetic data, data pipelines, scaling laws, etc) and post-training (RLHF vs instruction tuning, evals, tool calling).

Synthetic data is all you need
Llama3 was trained on 15T tokens, 7x more than Llama2 and with 4 times as much code and 30 different languages represented. But as Thomas beautifully put it: “My intuition is that the web is full of s**t in terms of text, and training on those tokens is a waste of compute.” “Llama 3 post-training doesn't have any human written answers there basically… It's just leveraging pure synthetic data from Llama 2.”

While it is well speculated that the 8B and 70B were "offline distillations" of the 405B, there are a good deal more synthetic data elements to Llama 3.1 than expected. The paper explicitly calls out:
* SFT for Code: 3 approaches for synthetic data for the 405B bootstrapping itself with code execution feedback, programming language translation, and docs backtranslation.
* SFT for Math: The Llama 3 paper credits the Let's Verify Step By Step authors, who we interviewed at ICLR.
* SFT for Multilinguality: "To collect higher quality human annotations in non-English languages, we train a multilingual expert by branching off the pre-training run and continuing to pre-train on a data mix that consists of 90% multilingual tokens."
* SFT for Long Context: "It is largely impractical to get humans to annotate such examples due to the tedious and time-consuming nature of reading lengthy contexts, so we predominantly rely on synthetic data to fill this gap. We use earlier versions of Llama 3 to generate synthetic data based on the key long-context use-cases: (possibly multi-turn) question-answering, summarization for long documents, and reasoning over code repositories, and describe them in greater detail below."
* SFT for Tool Use: trained for Brave Search, Wolfram Alpha, and a Python Interpreter (a special new ipython role) for single, nested, parallel, and multiturn function calling.
* RLHF: DPO preference data was used extensively on Llama 2 generations. This is something we partially covered in RLHF 201: humans are often better at judging between two options (i.e. which of two poems they prefer) than creating one from scratch. Similarly, models might not be great at creating text, but they can be good at classifying its quality.

Last but not least, Llama 3.1 received a license update explicitly allowing its use for synthetic data generation. Llama2 was also used as a classifier for all pre-training data that went into the model: it labelled data by quality, so that bad tokens were removed, and by type (i.e. science, law, politics) to achieve a balanced data mix.
Tokenizer size matters
The token vocabulary of a model is the collection of all tokens that the model uses. Llama2 had a 32,000-token vocab, GPT-4 has 100,000, and 4o went up to 200,000. Llama3 went up 4x to 128,000 tokens. You can find the GPT-4 vocab list on Github. This is something that people gloss over, but there are many reasons why a large vocab matters:
* More tokens allow the model to represent more concepts and be better at understanding the nuances.
* The larger the tokenizer, the fewer tokens you need for the same amount of text, extending the perceived context size. In Llama3's case, that's ~30% more text due to the tokenizer upgrade.
* With the same amount of compute you can train more knowledge into the model, as you need fewer steps.
The smaller the model, the larger the impact that the tokenizer size will have on it. You can listen at 55:24 for a deeper explanation.

Dense models = 1 Expert MoEs
Many people on X asked “why not MoE?”, and Thomas' answer was pretty clever: dense models are just MoEs with 1 expert :)
[00:28:06]: I heard that question a lot, different aspects there. Why not MoE in the future? The other thing is, I think a dense model is just one specific variation of the model for an hyperparameter for an MOE with basically one expert. So it's just an hyperparameter we haven't optimized a lot yet, but we have some stuff ongoing and that's an hyperparameter we'll explore in the future.
Basically… wait and see!

Llama4
Meta already started training Llama4 in June, and it sounds like one of the big focuses will be around agents. Thomas was one of the authors behind GAIA (listen to our interview with Thomas in our ICLR recap) and has been working on agent tooling for a while with things like Toolformer. Current models have “a gap of intelligence” when it comes to agentic workflows, as they are unable to plan without the user relying on prompting techniques and loops like ReAct, Chain of Thought, or frameworks like Autogen and Crew. That may be fixed soon?
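One quick way to see the tokenizer effect described above is to tokenize the same text with a ~32k-vocab and a ~128k-vocab tokenizer and compare counts; fewer tokens for the same text means more effective context and fewer training steps per unit of text. The checkpoint ids below are examples (and gated), so any two tokenizers you have access to will do.

```python
from transformers import AutoTokenizer

text = "Synthetic data is all you need: Llama 3 was trained on 15T tokens. " * 20

# Example checkpoints: one with a ~32k vocab, one with a ~128k vocab.
for name in ["meta-llama/Llama-2-7b-hf", "meta-llama/Meta-Llama-3-8B"]:
    tok = AutoTokenizer.from_pretrained(name)
    n_tokens = len(tok(text)["input_ids"])
    print(f"{name}: vocab={tok.vocab_size}, tokens for sample={n_tokens}")
```

The larger vocabulary typically produces noticeably fewer tokens for the same passage, which is the mechanism behind the context and compute savings described in the bullets above.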

Sweet Film Talk
Take 285 - Pod d'Or: Keeks Cannes Trip Recap and Film Reviews ft Jake Hamblin & Jack Ahlander

Sweet Film Talk

Play Episode Listen Later May 28, 2024 81:41


We talk about EVERYTHING. What to plan when you go, where to stay, what to do, what to avoid, and then we chat, review, and rank every film we saw there. 12 films that were all outstanding. Enjoy and cheers to 6 years of SFT. Here's hoping to have our own Palme d'Or in 6 more years. Love y'all and stay soooooooo sweeeeeeeett --- Support this podcast: https://podcasters.spotify.com/pod/show/sweetfilmtalk/support

Six-Figure Trucker
EP103: Becoming a Driveaway Millionaire with Jose Palma

Six-Figure Trucker

Play Episode Listen Later Apr 18, 2024 19:28


We're back with another interview on location at the Mid-America Truck Show! This time, we welcome back an SFT favorite as Jose Palma slides behind the mic. Jose is known and loved for his big personality, but today, he drops the mic on something equally big—his career earnings. In seven short years, Jose Palma is about to become a Driveaway Millionaire. That's no typo because the earning power in Driveaway is no joke! Listen in as Jose extols the benefits of Driveaway with Norton Transport and hypes the sights and sounds of the Mid-America Truck Show on this edition of the #SixFigureTruckerThe Six Figure Trucker is a weekly conversation that shares the strategies and stories that successful drivers have used to build lucrative careers in the trucking industry. For more information or to subscribe, please visit https://www.six-figuretrucker.com/.   

Irish Tech News Audio Articles
New Smarter Factory Technology Gateway to Propel Irish Companies into a Digital Future

Irish Tech News Audio Articles

Play Episode Listen Later Apr 8, 2024 3:47


The new Smarter Factory Technology Gateway (SFT) has launched at the Technological University of the Shannon (TUS) Moylish Campus. It is already providing an investment of €1.8 million in the Midwest and paving the way for enhanced enterprise and business security. The SFT is poised to support Irish companies in navigating and embracing the challenges faced by the digital landscape. Serving as a vital conduit between industry and academia, namely TUS, the SFT will forge collaborative ties with cutting-edge research institutions, bridging the gap between students, academia and enterprise. The data furnished by the gateway will empower businesses, fortifying them against digital threats and challenges in real-time, fostering smarter and more integrated operations. As technology continues to advance and digitalisation takes centre stage, the gateway aims to capitalise on the region's smart specialisations, particularly in advanced manufacturing, ICT, digital transformation, artificial intelligence, and virtual reality. These efforts will lead to tangible improvements in real-time systems, energy efficiency, and operational efficacy. Smarter Factory Technology Gateway Manager at TUS, Jim O'Hagan, remarked, "Our gateway will enhance the products, operations, efficiencies, and digital transformation of Irish manufacturing businesses through innovative research projects, funded by Enterprise Ireland grants. Through our advanced approach, we are poised as a catalyst for innovation and will be pivotal in meeting the evolving demands of customers, positioning companies for sustained growth and competitiveness." Explaining that the SFT will accelerate the development of future talent and innovation in the region Marina Donohoe, Head of Research and Innovation, Enterprise Ireland said, "New insights from smarter data will deliver many benefits for Irish businesses, such as improved quality, increased capacity, cost reductions and more sustainable operations. This important initiative will also lead to impactful innovation and will support businesses on their sustainable and digital journeys, to support innovative enterprises as they navigate the increasingly diverse and evolving digital landscape, and shape the future of manufacturing." Businesses can readily engage with the gateway through initiatives such as the Enterprise Ireland Innovation Voucher schemes or funded research programmes. The SFT stands as a beacon of excellence in the realm of Industry 4.0 and 5.0, spearheading digital and sustainable transformations to bolster innovative enterprises and shape the future landscape of manufacturing. The Smarter Factory Technology Gateway is co-funded by the Government of Ireland and the European Union through the ERDF Southern, Eastern & Midlands Regional Programme 2021-27. More about Irish Tech News Irish Tech News are Ireland's No. 1 Online Tech Publication and often Ireland's No.1 Tech Podcast too. You can find hundreds of fantastic previous episodes and subscribe using whatever platform you like via our Anchor.fm page here: https://anchor.fm/irish-tech-news If you'd like to be featured in an upcoming Podcast email us at Simon@IrishTechNews.ie now to discuss. Irish Tech News have a range of services available to help promote your business. Why not drop us a line at Info@IrishTechNews.ie now to find out more about how we can help you reach our audience. You can also find and follow us on Twitter, LinkedIn, Facebook, Instagram, TikTok and Snapchat.

Six-Figure Trucker
EP101: The Passion of Big E at the Truck Show

Six-Figure Trucker

Play Episode Listen Later Apr 4, 2024 19:39


We're live at the Mid-America Truck Show with Big E, Eric Ryan! Eric drove 650 miles to Louisville to take in all the Trucking eye candy and join us for this live recording. We certainly appreciate that because chatting with Big E is always a treat. He's a man who loves all things trucking and combines that with a fire that burns for progress and success. There's no question that Big E has “that dog” in him. In this episode, Eric gets personal as he talks about his fitness journey and the joy of sharing his passion for trucks with his boy. Prepare to be inspired as we grab twenty minutes with an SFT favorite, Eric Ryan, on this special edition on the road at the Mid-America Truck Show! #SixFigureTrucker
Show Notes:
We're Live with Big E from the Mid-America Truck Show! (1:04)
Meeting Influencers and admiring Eye Candy at the MATS (1:55)
Big E talks his love for Trucks and all things Trucking (5:30)
From failed owner-operator to deep pockets in Driveaway (9:20)
As a Teacher/Trainer, Big E brings the Wisdom! (11:42)
The Top Traits of a Deck Driver (14:40)
The inspiring backstory of Big E and his fitness journey (16:05)
Keep Truckin', Big E!
The Six-Figure Trucker is a weekly conversation that shares the strategies and stories that successful drivers have used to build lucrative careers in the driveaway trucking industry. For more information or to subscribe, please visit https://www.six-figuretrucker.com/. The Six-Figure Trucker is a weekly podcast about driveaway trucking brought to you by Norton Transport.

Ideias Radicais
(YT) O Código de Defesa do Sonegador de Impostos?

Ideias Radicais

Play Episode Listen Later Jan 12, 2024


One more victory for liberty, gentlemen. Gabinete Liberdade, a project of Ideias Radicais, has passed in Marechal Cândido Rondon, together with one of our partner city councillors, the Taxpayer Defense Code, which some are even calling the Tax Evader Defense Code (which is incredible). This law strengthens the taxpayer's legal standing before the state, putting brakes on some exploitative and authoritarian practices. Less tax for Haddad, Lula, and the state; more happy Brazilians. Tax-free draft beer SP: https://forms.gle/cJJjTtU2QUTFjAV56 NOVO's Jornada: https://novo.org.br/jornada2024/ Want to leave Brazil? Contact us: https://www.settee.io/ https://youtube.com/c/Setteeio Follow us on Telegram: https://t.me/ideiasradicais Want to buy Bitcoin at the best price on the market? Bity! https://bit.ly/BityIdeiasRadicais 00:00 - Introduction and explanation 04:56 - Equivalence in fees 07:48 - Digital signature 07:57 - Compensation for damages 08:49 - Inspection orders 10:26 - Respect for SFT and STJ precedents 11:31 - Right of defense before assessment 13:00 - And now, what do we do? 16:41 - Another law passed 20:49 - Closing remarks

Stay Forever
Sensible Soccer: Wusstet ihr eigentlich ...?

Stay Forever

Play Episode Listen Later Dec 20, 2023 77:23


We still had more to say about Sensible Soccer, so we sat down again to talk through some open questions: why the musician Captain Sensible appears in the soundtrack, what Sensible Soccer has in common with Sega's World Championship Soccer II, and why people still play SWOS today. On that last question we had support from Sensible expert Michael Jänsch, who runs sensiblesoccer.de, the largest fan site on the subject. Many thanks for the interview, Michael! Note: this podcast is an example of our format "Wusstet ihr eigentlich… ?" ("Did you actually know… ?"), a series of companion episodes to the big main episodes that we normally publish exclusively for our supporters. In them we work through unused research, occasionally conduct extra interviews, and dig deeper into side questions that went unanswered in the regular podcast. We make episodes like this for most of the main episodes of SF, SSF, and SFT. All of these and plenty of other formats are available on Patreon and Steady for supporters from around 5 euros per month.

The Nonlinear Library
AF - Scalable And Transferable Black-Box Jailbreaks For Language Models Via Persona Modulation by Soroush Pour

The Nonlinear Library

Play Episode Listen Later Nov 7, 2023 3:39


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Scalable And Transferable Black-Box Jailbreaks For Language Models Via Persona Modulation, published by Soroush Pour on November 7, 2023 on The AI Alignment Forum. Paper coauthors: Rusheb Shah, Quentin Feuillade--Montixi, Soroush J. Pour, Arush Tagade, Stephen Casper, Javier Rando. Motivation Our research team was motivated to show that state-of-the-art (SOTA) LLMs like GPT-4 and Claude 2 are not robust to misuse risk and can't be fully aligned to the desires of their creators, posing risk for societal harm. This is despite significant effort by their creators, showing that the current paradigm of pre-training, SFT, and RLHF is not adequate for model robustness. We also wanted to explore & share findings around "persona modulation"[1], a technique where the character-impersonation strengths of LLMs are used to steer them in powerful ways. Summary We introduce an automated, low cost way to make transferable, black-box, plain-English jailbreaks for GPT-4, Claude-2, fine-tuned Llama. We elicit a variety of harmful text, including instructions for making meth & bombs. The key is *persona modulation*. We steer the model into adopting a specific personality that will comply with harmful instructions.We introduce a way to automate jailbreaks by using one jailbroken model as an assistant for creating new jailbreaks for specific harmful behaviors. It takes our method less than $2 and 10 minutes to develop 15 jailbreak attacks. Meanwhile, a human-in-the-loop can efficiently make these jailbreaks stronger with minor tweaks. We use this semi-automated approach to quickly get instructions from GPT-4 about how to synthesise meth . Abstract Despite efforts to align large language models to produce harmless responses, they are still vulnerable to jailbreak prompts that elicit unrestricted behaviour. In this work, we investigate persona modulation as a black-box jailbreaking method to steer a target model to take on personalities that are willing to comply with harmful instructions. Rather than manually crafting prompts for each persona, we automate the generation of jailbreaks using a language model assistant. We demonstrate a range of harmful completions made possible by persona modulation, including detailed instructions for synthesising methamphetamine, building a bomb, and laundering money. These automated attacks achieve a harmful completion rate of 42.5% in GPT-4, which is 185 times larger than before modulation (0.23%). These prompts also transfer to Claude 2 and Vicuna with harmful completion rates of 61.0% and 35.9%, respectively. Our work reveals yet another vulnerability in commercial large language models and highlights the need for more comprehensive safeguards. Full paper You can find the full paper here on arXiv https://arxiv.org/abs/2311.03348 . Safety and disclosure We have notified the companies whose models we attacked We did not release prompts or full attack details We are happy to collaborate with researchers working on related safety work - please reach out via correspondence emails in the paper. Acknowledgements Thank you to Alexander Pan and Jason Hoelscher-Obermaier for feedback on early drafts of our paper. ^ Credit goes to @Quentin FEUILLADE--MONTIXI for developing the model psychology and prompt engineering techniques that underlie persona modulation. 
Our research built upon these techniques to automate and scale them as a red-teaming method for jailbreaks. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

Passing the Counseling NCMHCE narrative exam
Theory to Therapy - Solution Focused Therapy

Passing the Counseling NCMHCE narrative exam

Play Episode Play 36 sec Highlight Listen Later Aug 4, 2023 16:38 Transcription Available


What if you could equip a struggling adolescent with the tools to tackle their own challenges, empowering them to transform their lives from the inside out? That's just what we're exploring as Dr. Linton Hutchinson, and I delve into a fascinating case study involving Gracie, a 14-year-old girl facing difficulties at school and bullying. Join us on this journey as we underscore the significance of understanding, rapport building, and validation in the therapeutic process and how these elements can help Gracie reclaim joy and control in her life.Ever wondered about the power of a 'miracle question'? By shifting the focus from problems to strengths, we reveal how Solution-Focused Therapy can be a game-changer for adolescents like Gracie. It's all about encouraging self-discovery, fostering resilience, and letting the client lead the way toward their own solutions. Don't just listen — join the conversation and discover how you can transform theory into powerful, practical strategies. This isn't just a podcast episode; it's a masterclass in empowering change, one solution at a time. Don't miss it!If you need to study for your NCMHCE narrative exam, try the free samplers at: CounselingExam.comThis podcast is not associated with the National Board of Certified Counselors (NBCC) or any state or governmental agency responsible for licensure.

Sweet Film Talk
Take 220 - Who's Bringing Home the Gold? Oscar Nomination Reactions & Predictions + Most Overlooked 2022 New Releases

Sweet Film Talk

Play Episode Listen Later Jan 30, 2023 64:14


audio, you gotta love it :) SFT's first Sundance (3:30) - Underrated: The Stephen Curry Story - L'immensita - In My Mother's Skin - You Hurt My Feelings OSCAR Wants & Will Win's (22:00) - Sound (22:00) - Musical Score (23:10) - Make-up & Hairstyling (24:05) - Live Action Short (25:15) - Costume Design (26:30) - Animated Short (27:30) - Visual Effects (29:00) - Production Design (30:00) - Original Song (31:20) - International Feature (33:10) - Editing (34:00) - Documentary Short (35:35) - Documentary (36: 50) - Cinematography (38:10) - Actor (39:10) - Actress (40:50) - Supporting Actor (42:00) - Supporting Actress (42:50) - Animated Feature (43:45) - Adapted Screenplay (44:45) - Original Screenplay (46:40) - Director (47:35) - BEST PICTURE AKA BEST FILM (48:45) Most Overlooked of 2022 (55:30) ON THE SLATE: MISSING/AUDIBLE wowowowowowowwo it's oscar season and we can't wait to see how close our predictions are going to be. We always love this time of the year and look forward to celebrating movies ALL DAY, ALL YEAR LONG, BABY!!! --- Support this podcast: https://anchor.fm/sweetfilmtalk/support

A Mediocre Time with Tom and Dan
702 - Casselberry Roostertail

A Mediocre Time with Tom and Dan

Play Episode Listen Later Jan 6, 2023 118:51


SFT! Thanks so much to all of you who joined us live today! Not a bad crowd for the first real FFS of the new year! I hope you guys had as much fun as we did, and we'll see some of you turds at the Solar Bears tonight! (Check our website to snag those last-minute seats in section 118! But it's gonna be tight! Busdeker is bringing like 20 f'n people! Come on out!) Ah...the show! Let's GO! On this week's show: * Ross tells his side of the story from the Solar Bears' intermission  * Afroman's raid song * Confrontation of the week * Homeless people living in parking lots  * Casselberry rooster tail * Firework Matlock  * Black. White. reality show * Tom wants a new tradition where the father of the bride gets acknowledged for paying for the wedding * Price of dog being neutered  ### Have a lovely weekend, guys! We love ya! Thanks SO MUCH for listening and choosing to spend some time with us.  d

The Divorce Survival Guide Podcast
Divorcing with Children with Special Needs with Mary Anne Hughes

The Divorce Survival Guide Podcast

Play Episode Listen Later Oct 13, 2022 51:02


Mary Ann Hughes is the proud mother of two sons on opposite ends of the autism spectrum. Today she joins me for a conversation about going through a divorce when you have children with special needs. During her divorce, Mary Ann successfully advocated for her children's needs. As a result, she started Special Family Transitions to help families navigate the overwhelm and complexities of special needs divorce to get the best possible outcome, with as little time, money, and stress as possible. Today, she joins me for a conversation about navigating divorce in the midst of parenting (and eventually co-parenting) children with disabilities. Combining her experience and certifications as a Certified Divorce Coach, Certified Divorce Specialist, member of the National Association of Divorce Professionals, MBA, and years of special needs advocacy, Mary Ann is committed to supporting families with children with disabilities as a valued special needs divorce coach and consultant. Show Highlights Transitions can be hard for neurodivergent children – Mary Ann shares how parents approach the decision-making process of divorce The impact of divorce on children with disabilities How to co-parent with kids with special needs when a parent is not engaged or doesn't prioritize the children How and why you may want to set up a trust for your children What you need to know about divorce when you have kids on the spectrum Learn more about Mary Ann Hughes: As a mom of two boys on the autism spectrum who unexpectedly faced divorce after 21 years of marriage, Mary Ann Hughes had to learn how to navigate the complexities of special needs divorce, to effectively advocate for her children's needs and get a great result for her family in my divorce. Mary Ann formed Special Family Transitions and became a Special Needs Divorce Coach and Consultant so other moms of children with disabilities wouldn't have to spend the time, money, and emotional energy she did when faced with divorce. Mary Ann is on a mission to help mothers gain the confidence, skills and knowledge to successfully overcome the overwhelm and challenges of special needs divorce, to achieve the best possible result for their family.  Mary Ann combines her experiences as a Certified Divorce Coach, Certified Divorce Transition and Recovery Coach, Certified Divorce Specialist, Certified Life Coach, member of National Association of Divorce Professionals and NADP Special Needs Chapter, LoneStar LEND Leadership Education in Autism and Neurodevelopmental Disabilities Fellow, MBA with a successful career in Fortune 100 companies (pre-kids), and years of special needs training and advocacy, to help her clients effectively advocate for themselves and their children in special needs divorce. Resources & Links: Information and links may also be found here: https://kateanthony.com/podcast/divorcing-with-children-with-special-needs-with-mary-ann-hughes/ Grit and Grace Group Coaching is Open – Join us!Mary Ann's websiteMary Ann on FacebookMary Ann on InstagramMary Ann on YouTube Mary Ann on TikTokMary Ann on LinkedIn Mary Ann's course: Keys to Success in Divorce for Moms of Children with Special Needs – DSG listeners get 25% off with discount code SFT. THE M3ND PROJECT The M3ND Project's mission is to bring clarity and validation to victims and survivors and to provide tools and resources for those who are responding to abuse. 
Annette Oltmans founded The M3ND Project coming out of her own experience as a survivor of emotional abuse and double abuse and after years of researching academic materials and personally interviewing hundreds of abuse survivors, therapists, and faith leaders. M3ND does this by providing various educational resources and training courses. Sometimes, it can be hard to articulate what you are going through when you try to reach out to a friend or therapist for help, and it can make you feel crazy. As a survivor, I remember feeling this way. When I first came across Mend's Terms and Definitions tool, which names and explains covert abusive tactics, it was SO validating and illuminating. M3ND wants to share this resource with The Divorce Survival Guide Listeners for free!! Go get this tool that I think is so essential: Grab M3ND's Terms and Definitions Tool: https://kateanthony.com/mend JOIN THE SHOULD I STAY OR SHOULD I GO FACEBOOK GROUP