The farewell of Modric and Ancelotti. Celta will finally play in the Europa League. Rayo will go to the Conference League. Osasuna misses out on Europe. Manolo González, Borja Iglesias, Aridane, Cobeño and Luis Chacón feature. Match previews. The tribute to Nadal. F1 Monaco GP. MotoGP British GP.
It's not every day a new park opens in the centre of a capital city, and this one is extra special. Copenhagen's new Opera Park is not just a nice place to relax in the shadow of the opera house. It represents a radical departure from the type of parks found elsewhere in the city: this harbourfront garden is a place for the contemplation of nature, of trees and plants from around the world, of water, and of sky. It's blessedly free from programming - there are no cycle paths, no running tracks, no outdoor gyms and no playgrounds. For this episode we are lucky enough to be joined by its designer, Maj Wiwe, landscape director at Cobe. She explains the original idea behind the park, but also the extraordinary technical challenges involved in constructing a mature garden with 10 metre-high trees and a cafe. On top of a multi-storey car park. Which is buried underground. On reclaimed land. In the harbour. You see what we mean by challenges?
Happy Monday! Today, we start off with Michelle Obama's recent scoop on White House living and President Trump's plan to reopen Alcatraz. We celebrate the Kentucky Derby champ before cheering Riley Gaines for exposing a boys‑in‑girls'‑sports fiasco in Maine. Then we dive into Ilhan Omar's eyebrow‑raising “fear white men” comment, gasp at the viral AI photo that turned Trump into the Pope, and enjoy JD Vance and Mike Lee's tongue‑in‑cheek reactions. We continue with Kristen Welker tagging along at Mar‑a‑Lago, where Trump tackles tough questions on due process, tariffs, and border security—then torches the “fake media” and the *Wall Street Journal* for good measure. Meanwhile, Elon Musk reveals DOGE deep‑dives, a mysterious new “Blindsight” gadget, and how he really wants to be remembered—all while CNN tries (and fails) to roast him. We wrap with Jen Psaki battling Karoline Leavitt, James Carville calling Republicans liars, and a quick “Chicks Update” from us.
No one eats perfectly. Fill your nutrition gaps the easy way with Field of Greens! Use code CHICKS at https://FOGChicks.com to save 20% off your first order.
Protect your skin year-round with OneSkin's broad-spectrum SPFs. Get 15% off with code CHICKS at https://OneSkin.co
Be prepared for when you need medicine the most with All Family Pharmacy. Visit https://AllFamilyPharmacy.com/Chicks and use code CHICKS10 to get 10% off your entire order.
Register for Bulwark Capital's free “Tariff Edition” webinar on May 22 at 3:30pm Pacific. Sign up today at https://KnowYourRiskRadio.com
Celebrate President Trump's first 100 days in office with the Big Beautiful Bundle from Republican Red Winery. Get 10% off PLUS free shipping at https://RepublicanRed.com with code CHICKS!
In this video, we dive into one of the most fascinating phenomena in astrophysics: the cosmic microwave background, the fossil radiation emitted 380,000 years after the Big Bang, the true first light of the universe. This cosmic radiation, observable today in the form of microwaves, is a precious archive of the primordial universe. Thanks to observations from the ACT (Atacama Cosmology Telescope) in Chile, a major new advance has been made, revealing unprecedented details about the polarization of light, temperature variations in the infant universe, and the movements of matter just after the era of recombination. We look back at the history of the discovery of the cosmic microwave background, from the theoretical predictions of George Gamow, Ralph Alpher and Robert Herman, to the accidental discovery by Penzias and Wilson, and on through the observations of COBE, WMAP and Planck. The cosmic microwave background is not just a thermal map: it is a true imprint of the past, a direct witness to the initial conditions of the universe, and a key to exploring fundamental notions such as dark energy, the formation of the first stars, and the large-scale structure of the universe.
INTERVIEW WITH DAVID COBEÑO, SPORTING DIRECTOR OF RAYO.
The Cobeña Town Council has signed an agreement with the owner of the land on which 62 price-capped public housing units will be built, so that young people from the municipality can have priority in purchasing them.
Today's news. Vinicius and interest from Saudi Arabia. Pau Cubarsí's contract renewal. The Rubiales trial. Europa and Conference League. Copa del Rey basketball. Interview with Cobeño, sporting director of Rayo Vallecano.
Rayo Vallecano's sporting director reviewed the club's latest news on Deportes COPE. He acknowledged that there has been interest from Saudi Arabia in some players, talked about how Raúl de Tomás is doing, and admitted they are trying to buy Batalla: "It would be historic to play in Europe with Rayo in the year of the Centenary."
Send us a text. In this episode, we sit down with Angel Mathis, a nurse practitioner, financial strategist, and world traveler who cracked the code to early retirement at just 35 years old. Angel shares why the real problem isn't burnout—it's entrapment—and how shifting your financial strategy can unlock true freedom from the bedside.
We cover it all:
✅ Burnout vs. Entrapment – Why feeling stuck in nursing is the real issue (and how to escape it)
✅ How Angel Retired at 35 – The financial game plan that let her leave nursing (and why she came back—on her terms)
✅ Short-Term vs. Long-Term Wealth – The two paths every nurse should build simultaneously
✅ Breaking Free from the System – Why nurses get commodified—and how to stop playing the hospital's game
✅ Life After Nursing – Walking the Pacific Crest Trail, sailing the world, and creating wealth beyond bedside shifts
Angel isn't just talking theory—she's lived it. Whether you're a new grad feeling trapped, a burned-out nurse searching for options, or someone ready to make money work for YOU, this episode is packed with real talk, practical strategies, and career-changing insights.
What makes for really good user experience? And why is it more important today than ever? In this episode we talk with Felix van de Sand, co-founder of the UX agency COBE, about everything that shapes modern UX design - from personalization to artificial intelligence. Felix shows how brands manage to craft digital experiences that delight on an individual level, and why data, design and emotion play a central role in doing so. We discuss which UX trends will be indispensable in 2025, how AI is changing UX strategies, and why accessibility is non-negotiable for digital products. Felix also shares practical insights from his work: How do you avoid typical UX fails? And how do you connect brand values with a real wow factor for users? With COBE's UXi method, it is possible to build digital products that do more than just look good. This episode is perfect for anyone who wants to understand how smart UX can pay off not only now, but also in the future. So let's go! Tune in!
"DOME with bamfomania" is the greatest freestyle-rap/comedy podcast IN THE WORLD. If the beat drops while you're talking about it... you gotta rap about it. This week, we are joined by Cobe Jones, a hip-hop artist from Los Angeles. We talk about Judaism, touring, Matisyahu, Johnny Somali, ghostwriting, record deals, The Substance movie, and more. Also freestyles! If you would like to support the show, get access to episodes early, bonus episodes, and other content weekly, sign up at https://patreon.com/DOMEwithbamfomania Beats & Links: https://docs.google.com/document/d/1rVGs71YXm2ZGy4Wls6aQcn-v5uH0CPoy--2aMgniJNw/edit?usp=sharing Follow us on Instagram: https://www.instagram.com/cobejonestv/ https://www.instagram.com/bamfomania/ https://www.instagram.com/sultansatire/ https://www.instagram.com/bubbawhyy/ Listen to "DOME with bamfomania" on all podcast platforms: https://podcasts.apple.com/gb/podcast/dome-with-bamfomania/id1601495349 https://open.spotify.com/show/2IMnymbj1RU5U0NVXYLH9T?si=3ffba705f3a24e8f https://soundcloud.com/bamfdome Listen to bamfomania music on Spotify: https://open.spotify.com/artist/1w5Z3rwfh4BOU78BKZgFbk?si=rQB7uhH_SKmYrzYyI_Kvkg Listen to Sultan Satire music on Spotify: https://open.spotify.com/artist/4fvxByDc6w4Q49dcl9AKYS?si=LWa1-oSnQYmVZB1_qTKzTg If you enjoy this content please like, comment, subscribe and share
In this issue, we publish two new projects by CoBe: the eco-district Rouget de l'Isle in Poissy and the urban redevelopment of a fortified district, more specifically that of the old Fort d'Aubervilliers. Both are located in Ile-de-France, not far from Paris, France. Read by Esther. Image teaser DR © CoBe. Sound engineering: Bastien Michel. If you like the podcast, do not hesitate: to subscribe so you don't miss the next episodes, to leave us stars and a comment :-), and to follow us on Instagram @comdarchipodcast to find beautiful images, always chosen with care, so as to enrich your view on the subject. Nice week to all of you!
In this continuation of the interview with the agency CoBe (part two here), we get deeper into the substance of the architectural project, in the metropolis and in rural areas too. What feedback do residents and users give on the designers' work? Is it even conceivable today to work without it? Will this attention to use help build a better world? Highly sensitive projects are discussed here, such as La Grande Borne in Grigny, but also the climate in Greece and the Place de la Concorde. What future are we going to offer our children? Architectural projects that are really communication projects? Or is this a way of genuinely thinking about otherness? One observation: we have to move beyond preconceptions... A fascinating conversation with Alexandre Jonvel, Luc Monéger and Mathieu Galard, to be found here. Portraits teaser DR © B & W. Sound engineering: Bastien Michel. If you like the COM D'ARCHI podcast, do not hesitate: to subscribe so you don't miss the next episodes, to leave us stars and a comment :-), and to follow us on Instagram @comdarchipodcast to find beautiful images, always chosen with care, so as to enrich your view on the subject. Nice week to all of you!
INTERVIEW WITH DAVID COBEÑO, SPORTING DIRECTOR OF RAYO VALLECANO (11/10/2024).
In this issue, after a brief presentation of the agency, you'll find two of CoBe's flagship projects with a strong urban planning dimension: the Village des Athlètes in Saint-Ouen and the Pôle Laherrère in Pau, France. One project is in the Ile-de-France region (mentioned in the teaser audio) and one in the French regions (teaser image), and both also use wood to make a strong commitment to the climate. Read by Esther. Image teaser DR © Luc Boegly. Sound engineering: Bastien Michel. If you like the podcast, do not hesitate: to subscribe so you don't miss the next episodes, to leave us stars and a comment :-), and to follow us on Instagram @comdarchipodcast to find beautiful images, always chosen with care, so as to enrich your view on the subject. Nice week to all of you!
"Fondée en 2002 par les architectes et urbanistes Alexandre Jonvel, Raphaël Denis et Martin Lemerre, CoBe s'est étoffé d'un associé paysagiste, Luc Monéger en 2010 et d'un directeur technique associé, Fabrice Taillandier en 2017. Depuis 2021 l'agence élargit encore un peu plus son champ d'actions en célébrant l'ouverture de son pôle design, et continue son expansion dans l'Ouest et le Sud-Est de la France, ainsi qu'en Espagne.Ainsi, l'agence CoBe regroupe les métiers de l'architecture, de l'urbanisme, du paysage, de la conduite de chantier et du design. Elle intègre dans les créations qu'elle dessine la capacité à corriger les ruptures, à créer des liens nouveaux, à regarder l'avenir par le biais de ses compétences croisées, avec bienveillance et confiance."L'agence CoBe, pluridisciplinaire, compte dans le paysage français d'aujourd'hui et même au delà de nos frontières. Elle témoigne au micro de Com d'Archi à travers les voix d'Alexandre Jonvel, Luc Monéger et Mathieu Galard. Dans cette première partie de l'interview il est question des parcours, des métiers, des implantations et des projets : une pratique du projet en constellation, à toutes les échelles, et surtout d'une efficacité impressionnante.Portraits teaser DR © B & WInnierie son : Bastien Michel____Si le podcast COM D'ARCHI vous plaît n'hésitez pas :. à vous abonner pour ne pas rater les prochains épisodes,. à nous laisser des étoiles et un commentaire, :-),. à nous suivre sur Instagram @comdarchipodcast pour retrouver de belles images, toujours choisies avec soin, de manière à enrichir votre regard sur le sujet.Bonne semaine à tous ! Hébergé par Acast. Visitez acast.com/privacy pour plus d'informations.
Send us a Text Message. Join Josh, Jenn, and Colin as they dive into the world of real estate investing while balancing their demanding careers in nursing. This episode covers the essential steps to landing your first real estate deal, including the importance of finding the right partner, avoiding common mistakes, and leveraging your network. The team also shares personal stories of how they got started, highlighting the power of teamwork and determination. If you're a nurse exploring real estate or just looking to balance a busy professional life with smart investments, this episode is for you!
Timestamps:
(00:00) Highlights & Quick Recap
(01:30) Introduction: Partnerships in Real Estate and Building a Team
(04:00) Getting Started: Finding Your "Cheerleader" for That First Deal
(07:00) Learning the Hard Way: When Greed and Impatience Take Over
(10:20) Jenn's Journey: From Taking Notes to Managing Properties
(13:00) Overcoming Challenges: Balancing Real Estate and Nursing Careers
(15:40) Networking 101: Why Surrounding Yourself with the Right People Matters
(18:50) Real Estate Strategies: Cash Investor vs. Active Manager
(22:00) The Benefits of Partnering with Friends and Family in Real Estate
(24:30) Building Your Confidence: How Partnerships Make the Risk Feel Smaller
(27:00) The Importance of Real Estate Education: Podcasts, Books, and Meetups
(30:00) Optimizing Your Investments: How to Manage Growing Portfolios
(33:20) Final Thoughts: Focus on Strengths, Learn from Mistakes, and Keep Growing
About the Podcast:
Investing RN merges the world of nursing with real estate investing, offering healthcare professionals practical advice on how to achieve financial freedom. We focus on giving nurses the tools they need to balance their careers while growing wealth through real estate. Whether you're managing long shifts or investing in properties, our goal is to provide you with the inspiration and knowledge to make smart, confident financial decisions.
Follow Us:
In this episode of The Light Watkins Show, Light Watkins sits down with Cobe Williams, a true example of transformation and redemption. Cobe's story is one of overcoming incredible odds and breaking free from a life of violence. Growing up on the South Side of Chicago, Cobe followed in the footsteps of his father, a gang member, and spent years in and out of prison. But his journey didn't end there. After multiple incarcerations, Cobe decided to turn his life around, inspired by the desire to be a better father for his son. Today, Cobe is a professional peacemaker, working with Cure Violence Global to mediate conflicts, mentor at-risk youth, and prevent violence in inner-city communities. His work is a powerful reminder that it's never too late to change your path and make a positive impact on the world. In this conversation, Light and Cobe dive deep into the moments that shaped Cobe's life, from his early experiences with his father to the pivotal moment that set him on a new course. They discuss the challenges of life after prison, the importance of building trust within communities, and the incredible power of redemption. Listeners will learn how Cobe uses his past experiences to connect with young people, helping them see that there's a way out of the cycle of violence. This episode offers practical insights into how anyone can inspire change in their own lives and in the lives of others. Whether you're interested in community work, personal growth, or simply hearing a powerful story of redemption, this episode is not to be missed. Send us a text message. We'd love to hear from you!
Sue Stockdale talks to Cobe Williams, the Director of U.S. programming for Cure Violence Global, as he reflects on his upbringing in Chicago, his exposure to street and gang life, and the impact of violence in his community. From childhood memories of block parties to the absence of a father figure leading him to the streets, Cobe shares poignant moments that shaped his life and perspective on the importance of addressing violence in communities. Learn how Cobe turned his life around to focus on community work, relationships, violence prevention, and the impact of the Cure Violence Global model.
About Cobe Williams
Ricardo "Cobe" Williams's journey from the depths of gang life to becoming an international symbol of peace is nothing short of remarkable. His life story reads like a screenplay - born into the notorious Black Disciples, a childhood marred by the brutal murder of his father, and years spent navigating the treacherous waters of gang leadership. Yet, his astonishing turnaround - from gang leader to award-winning peacekeeper and community activist - offers a blueprint for social reform worldwide. His work has been celebrated across media outlets like People Magazine and he has earned many accolades, including the Hero Award from Philip Zimbardo and the United Nations Peace Award. Cobe serves as Director of US Programs for Cure Violence Global, overseeing training and technical assistance for more than 50 sites across more than 20 cities. Cobe travels the globe training violence interrupters in mediation and conflict resolution strategies. Connect with Cobe Williams via InterruptTheViolence.com
Key Quotes
"It's important to listen and get to know people and build relationships."
"A lot of youth feel nobody listen to them."
"A lot of times people, on the news when somebody got shot or somebody got killed or whatever, they always say it's gang related. That's not true. A lot of this violence is interpersonal."
"I can help save somebody's life. It feels good to know I saved somebody from getting shot and getting killed."
"People don't just wake up and say they want to do something bad to somebody, right? But a lot of times people they don't know how to ask for help."
"Meet people where they are. Don't judge nobody because you never know what they've been through and what they're going through."
"It's not how you start, but it's how you finish."
Time Stamps
[03:16] Turning point in court.
[06:40] Interpersonal violence beyond gangs.
[10:48] Credibility and community impact.
[12:44] Sports and community building.
[17:56] Overcoming struggles and inspiring others.
[20:47] Building relationships and understanding.
Connect with Access to Inspiration: Twitter | Facebook | Instagram | LinkedIn | Read our Impact Report, and if you would like to support us then Buy Me A Coffee.
Producer: Sue Stockdale. Sound Editor: Matias De Ezcurra.
Become a supporter of this podcast: https://www.spreaker.com/podcast/access-to-inspiration--4156820/support.
Send us a Text Message. Welcome to another roundtable discussion with the Investing RN team—Josh, Jenn, and Colin. The team opens up about the benefits and drawbacks of travel nursing, highlighting how travel nurses often jump straight into work with minimal training, which can be both a blessing and a challenge. They discuss the financial and operational impact of nursing strikes and unions. The team shares their struggles and strategies for balancing the demanding schedules of nursing with personal responsibilities, like managing energy levels and adjusting to life with a newborn. Jenn shares her experiences with training in the cath lab, while Josh and Colin talk about their approaches to maintaining work-life balance despite their hectic schedules. On the real estate front, the team provides exciting updates on their latest ventures. They discuss their decision to make their first U.S.-based hire for property management, a strategic move to help them scale their portfolio. The hosts explain the importance of optimizing current investments, detailing the processes involved in filling empty mobile home lots and the financial impact of these additions. They also touch on their goals for the future, including plans for expanding their real estate holdings and enhancing their community-building efforts. Whether you're a nurse looking to invest or an investor interested in the healthcare industry, this episode offers valuable insights and practical advice.
Timestamps:
(00:00) Highlight
(01:35) Introduction to the Team Roundtable
(04:25) Balancing Energy Drinks, Sleep, and Parenthood
(06:50) The Reality of Nursing Strikes: Insights from Rady Children's Hospital
(13:30) Financial and Operational Impact of Using Travel Nurses
(15:40) Challenges and Strategies in Real Estate Investing
(24:10) Real Estate Updates: New Hire and Property Management
(33:45) The Importance of Optimizing Current Investments
(40:20) Personal Experiences and Insights on Real Estate Management
(51:50) Final Thoughts and Future Plans
About Investing RN Podcast: Welcome to the Investing RN Podcast, where we empower nurses and healthcare professionals to navigate the world of finance with confidence. Our mission is simple: to provide the tools, knowledge, and inspiration necessary for nurses to make informed financial decisions and build wealth. We believe that financial literacy is essential for creating a life of abundance, security, and freedom. Through our podcast, we aim to equip nurses with the resources they need to invest their time, money, and relationships wisely. Join us as we help nurses take control of their finances and create a future they love.
Follow Us On Socials:
Interrupting Violence: One Man's Journey to Heal the Streets and Redeem Himself follows Cobe as he undertakes his redemption journey, offering new hope for the nation's most violent communities. As the country wrestles with the inequities exposed by the coronavirus pandemic and the complex intersections of urban violence, racial injustice, police brutality, and poverty in the aftermath of George Floyd's murder, this book provides an inspiring blueprint. Cobe's story demonstrates how the country can resolve the issues plaguing our inner cities, taking readers into an often misunderstood and misrepresented aspect of the Black experience in America. Released July 2, 2024 through Rowman & Littlefield Publishing.
The Amazon link is: https://www.amazon.com/Interrupting-Violence-Journey-Streets-Himself/dp/1538166879/ref=tmm_hrd_swatch_0?_encoding=UTF8&qid=&sr=
And the book website is: https://www.interruptingviolence.com/
Support this show http://supporter.acast.com/insight-to-action-inspirational-insights-podcast.
Clean eating, dirty sodas, Asian street food, sea vegetables and souped-up snacks for a nation of noshers - these are all items you may be seeing on more restaurant menus, observes Pat Cobe, Senior Editor for Restaurant Business Magazine. In her long career covering the foodservice industry, Cobe has reported on restaurant and menu development and other industry topics. She edits the weekly On the Menu newsletter and co-hosts the Menu Talk podcast with Brett Thorn. The Connected Table is broadcast live Wednesdays at 2PM ET on W4CY Radio (www.w4cy.com), part of Talk 4 Radio (www.talk4radio.com) on the Talk 4 Media Network (www.talk4media.com). The Connected Table Podcast is also available on Talk 4 Media (www.talk4media.com), Talk 4 Podcasting (www.talk4podcasting.com), iHeartRadio, Amazon Music, Pandora, Spotify, Audible, and over 100 other podcast outlets.
Bill Sherman hosts Cobe Williams, a pioneer in violence prevention. Cobe shares his journey from personal redemption to global impact. As Director of US Programs for Cure Violence Global, he uses epidemic control methods to halt violence in communities. His story is a powerful testament to transformation and hope. Cobe's defining moment came in court, in restraints, when he embraced his son after a year and a half. This emotional reunion catalyzed his resolve to change his life and be there for his family. Today, Cobe leads violence prevention programs worldwide, from Chicago to Africa, transforming communities and inspiring change. With almost two decades in the field, Cobe began as a "Violence Interrupter," mediating conflicts in his neighborhood. His approach is deeply relational, meeting people where they are, and spreading the message that disagreements don't have to escalate to violence. His work involves intense listening, strategic mediation, and leveraging community relationships to prevent retaliation and promote peace. Cobe also discusses his upcoming book, "Interrupting Violence: One Man's Journey to Heal the Streets and Redeem Himself," co-authored with Josh Gryniewicz. The book aims to inspire others by sharing Cobe's personal journey and the lessons he's learned in violence prevention. Cure Violence Global's success attracts mayors, health departments, and private funders seeking proven methods to reduce violence. Cobe emphasizes the importance of community credibility and relationship-building in their programs, ensuring local leaders are respected and effective. Through his story, Cobe hopes to show that it's never too late to change, and redemption is possible for everyone. His work and message aim to foster understanding, reduce judgment, and inspire others to share their stories and seek help.
Three Key Takeaways:
• Transformative Power of Personal Redemption: Cobe Williams' journey from a courtroom revelation to leading global violence prevention efforts illustrates how personal transformation can drive impactful change. His story emphasizes that it's never too late to change and that redemption is possible for everyone.
• Community-Centric Violence Prevention: Cure Violence Global's approach leverages deep community relationships and epidemic control methods to interrupt and prevent violence. By training and supporting local leaders who are respected and credible within their communities, the program effectively reduces shootings and killings.
• Inspiring Others Through Storytelling: Cobe's upcoming book, "Interrupting Violence: One Man's Journey to Heal the Streets and Redeem Himself," aims to motivate and inspire others. By sharing his personal experiences and the challenges he faced, he hopes to encourage people to understand the roots of violence, reduce judgment, and inspire others to seek help and share their stories.
LifeBlood: We talked about addressing systemic violence, the causes of it, how it's a public health issue, attacking the problem from the top down as well as the ground up, the necessity of positive role models, and how to solve the problem, with Cobe Williams, award-winning peacekeeper, community violence pioneer, speaker, and author. Listen to learn how to start making a difference! You can learn more about Cobe at InterruptTheViolence.com and LinkedIn. Get your copy of Interrupt the Violence here: https://amzn.to/3Vw5XGP Thanks, as always, for listening! If you got some value and enjoyed the show, please leave us a review here: https://ratethispodcast.com/lifebloodpodcast You can learn more about us at LifeBlood.Live, Twitter, LinkedIn, Instagram, YouTube and Facebook, or if you'd like to be a guest on the show, contact us at contact@LifeBlood.Live. Stay up to date by getting our monthly updates. Want to say "Thanks!"? You can buy us a cup of coffee. https://www.buymeacoffee.com/lifeblood
Whoa! Ladies and gentlemen, I'm pleased to present Mr. J. Cobe! A man who is, if not prodigious, at least prodigiously loquacious! I truly enjoy being in J's company. He is a writer, a storyteller, a builder, a helper, a man quixotically fixated on leading a weird life, squares be damned! I hope you enjoy it, I sure did. Share and Enjoy!
Tunes in this Episode
Oscar Aleman - Delicado
Fred Wesley & the New J.B.'s - Breakin' Bread
SOLO SESH: Class is in SESSION! To celebrate the launch of the Monthly Manifestation Journal we're kicking off PART TWO of the Manifestation Master Class on the podcast
On this episode I'm speaking with Nate Jenkins, Principal at OZ Architecture in Denver, Colorado. Nate has over 25 years of experience and a distinguished portfolio of work spanning the globe. He's passionate about working with clients to develop projects with a high level of design while staying true to the environment and the communities within which their projects are sited. Nate has a deep passion for improving design through process, designing and developing projects that make a positive contribution and is not afraid to push boundaries and make change happen. His experience includes millions of square feet of mixed-use, commercial/retail, office, resorts and hospitality, K-12, Higher-ed, and multi-family projects. Related links for this episode: · OZ Architecture - https://ozarch.com/ · Nate on LinkedIn - https://www.linkedin.com/in/nate-jenkins-308093a/ · Nate on Instagram - https://www.instagram.com/natejjenkins/ · Ace Hotel / Alex Calderwood - https://www.fastcompany.com/3031200/alexander-calderwood-never-stop · Paper Island - https://www.cobe.dk/projects/paper-island · Cobe - https://www.cobe.dk/ · Good to Great (book) - https://amzn.to/4c3YWU3 Be sure to support this podcast by subscribing and reviewing! Get on the list at https://transformingcities.io for future announcements. Brought to you by Authentic: https://authenticff.com © 2024 Authentic Form & Function
SOLO SESH: SURPRISE!!! The Monthly Manifestation Journals are HERE! And to celebrate we're kicking off a Manifestation Master Class on the podcast
On 29 May, Rayo Vallecano will mark 100 years of history, and it will do so in the First Division. After a season marked by inconsistency and the start of a new era without Andoni Iraola on the bench, the franjirrojo side began the campaign with Francisco in the dugout and finished it with Íñigo Pérez, who saved the team on matchday 37. With the objective achieved, it is time to take stock and look to the future. David Cobeño, the architect of a new top-flight Rayo, joined the microphones of 'Marcador' and went over all the latest news around the Vallecas club.
This week I'm back with a solo sesh and chatting all things ✨happy hormones✨ On today's episode I'm unraveling the complexities of hormone dysregulation and dishing out insights from my recent lab tests. Plus I'm sharing my strategies for stabilizing blood sugar and optimizing wellness through cycle syncing +++ tips to boost those happy hormones!
We're chitchatting:
Hormone health 101
Current nutrition protocol
Hormone dysregulation
Lab work
Blood Sugar Stabilization
Current wellness practices
Cycle syncing
Mentioned Resources:
Is This Normal? Dr. Jolene Brighton
May Journal Prompts HERE
This week I'm BACK with a solo sesh and chatting all things ✨collective transformation✨ Spring is officially in the air and we're diving deep into death and rebirth cycles and what lifestyle changes I've been implementing to help get me out of a winter slump.
We're chitchatting:
spring reset
winter slump
releasing energetic cords
facing fears & shadows
leaning on spiritual support
astrological new year
navigating eclipse season
LTTHMMLBL (little things that have made my life better lately)
bare minimum era
April Journal Prompts HERE
Interrupting Violence follows Cobe Williams as he undertakes this redemption journey, offering new hope for the nation's most violent communities. Cobe takes readers into an often misunderstood and misrepresented aspect of the Black experience in America. As the country wrestles with the inequities exposed by the coronavirus pandemic and the complex intersections of urban violence, racial issues, police brutality, and poverty in the aftermath of George Floyd's murder, this book provides an inspiring blueprint. Cobe's story demonstrates how the country can resolve the issues plaguing our inner cities. Josh Gryniewicz co-authors this book. Welcome back to That Entrepreneur Show! If you enjoy the show, please subscribe for weekly episodes and rate the show 5 stars to help others also join our conversations! These entrepreneurs joined the Writing with Authors series, so there is a FULL VIDEO of the interview. Check it out here: https://www.youtube.com/watch?v=V_NBtoROtC4&list=PLat9MDCRaOgKnpjC528PB3BxtG9snH6aa&index=2 Support the show. If you enjoyed this week's show, click the subscribe button to stay current. Listen to A Mental Health Break episodes here. Tune into Writing with Authors here.
Friendship breakup was the best thing that ever happened to me. File THAT under things I recently said that shocked me. This week it's just you and me baby, and we're diving deep into relationship building 101. More specifically, how to make friends and grow community wherever you're at. Whether you're in transition, have moved to a new city, or just need to hit refresh on your relationships, this episode is one you won't want to miss.
On this episode we're chatting:
circles of closeness
friendship breakups
making friends
growing community
upleveling
your "top 5"
taking relationship inventory
Mentioned Resources
Rockbottom to Uplevel: Listen Here
This week I'm coming to you with a solo sesh and we're talking all about the benefits of having a monthly manifestation process. I kickoff today's sesh in detail with my trip to Costa Rica, and how being in a blue zone is inspiring new pleasure practices. I'm also dishing my February archetype and current manifestations along with an update on the product I'm in the midst of developing.
On this episode we're chatting:
monthly manifestation process
feb monthly archetype
current manifestations
trip to costa rica
quantum leaps
product update
bedside table tour
Mentioned Resources
monthly archetype episode: Craft A Monthly Archetype
Artist: Cobe-Cobe (United Kingdom) Name: Phangan Sunship (February 2024) Genre: Organic House Release Date: 04.02.2024 Exclusive: Deep House Moscow Cobe-Cobe: https://soundcloud.com/cobecobemusic Instagram: https://www.instagram.com/cobecobe__ CONTACT (DHM): Email — deephousemoscow@hotmail.com Follow us: www.facebook.com/deephousemsk/ www.instagram.com/deephousemoscow/ vk.com/deephousemsk/
Stone's Shenanigans Ep. 33 - Christopher Boom and Cobe Berglund by KBVU 97.5 The Edge
The universe has a cold spot. Or rather, a region that is colder than it should be. What is going on there, and what it has to do with the Big Bang itself, is the subject of the new episode of Sternengeschichten: https://astrodicticum-simplex.at/?p=36793 If you would like to support the podcast financially, you can do so here: via PayPal (https://www.paypal.me/florianfreistetter), Patreon (https://www.patreon.com/sternengeschichten) or Steady (https://steadyhq.com/sternengeschichten). For the sold-out "Sternengeschichten Live" show there will be an additional date on 29 March: https://schwarzkaue-herten.de/veranstaltung/sternengeschichten-die-live-premiere-in-unserem-spiralarm-der-milchstrasse-2// On 24 March the Schwarzkaue Herten will also host the live show of episode 100 of the podcast "Das Universum": https://schwarzkaue-herten.de/veranstaltung/das-universum-wird-100-jubilaeums-gala-2/
This week I'm coming to you with a solo sesh and we're talking all things building Pleasure Practices. I kickoff today's sesh in detail with my first psychic experience (it was amazing!!!), and how you can begin connecting with your ancestral lineage to harness your healing and inner-knowing system. We then unpack the principles behind establishing a Pleasure Practice and how you can utilize the magic of play & curiosity to step into your best (most aligned) self!
On this episode we're chatting:
my psychic experience
connecting with spirits
following your intuition
freedom > fear
masculine & feminine energetics
surrendering to pleasure
pleasure practices 101
Mentioned Resources
monthly archetype episode: Craft A Monthly Archetype
It's that time of year!!! Fall is in full swing and I couldn't help but create a super special episode for all my spiritually inclined, manifesting, spooky season cuties! In this Halloween-themed episode I'm chatting with Leanne Marama, Psychic Medium & the founder of Pentagram Witchcraft Shoppe in Salem, Massachusetts about ALL things witchcraft, Salem, and the power of embracing the divine feminine.
On this episode we chat:
Salem Witch Trials
Modern day witchcraft
Nurturing intuition
Spirit guides 101
Divine feminine
Seeking signs
Communicating with the deceased
Evolution of spirituality
Halloween rituals
Thanks to the over 17,000 people who have joined the first AI Engineer Summit! A full recap is coming. Last call to fill out the State of AI Engineering survey! See our Community page for upcoming meetups in SF, Paris and NYC. This episode had good interest on Twitter.
Fast.ai's "Practical Deep Learning" courses have been watched by over 6,000,000 people, and the fastai library has over 25,000 stars on GitHub. Jeremy Howard, one of the creators of fast.ai, is now one of the most prominent and respected voices in the machine learning industry; but that wasn't always the case.
Being non-consensus and right
In 2018, Jeremy and Sebastian Ruder published a paper on ULMFiT (Universal Language Model Fine-tuning), a 3-step transfer learning technique for NLP tasks. The paper demonstrated that pre-trained language models could be fine-tuned on a specific task with a relatively small amount of data to achieve state-of-the-art results. They trained a 24M-parameter model on WikiText-103, which beat most benchmarks.
While the paper had great results, the methods behind it weren't taken seriously by the community:
"Everybody hated fine tuning. Everybody hated transfer learning. I literally did tours trying to get people to start doing transfer learning and nobody was interested, particularly after GPT showed such good results with zero shot and few shot learning […] which I was convinced was not the right direction, but who's going to listen to me, cause as you said, I don't have a PhD, not at a university… I don't have a big set of computers to fine tune huge transformer models."
Five years later, fine-tuning is at the center of most major discussion topics in AI (we covered some, like fine-tuning vs RAG and small-model fine-tuning), and we might have gotten here earlier if Jeremy had had OpenAI-level access to compute and distribution. At heart, Jeremy has always been "GPU poor":
"I've always been somebody who does not want to build stuff on lots of big computers because most people don't have lots of big computers and I hate creating stuff that most people can't use."
This story is a good reminder of how some of the best ideas are hiding in plain sight; we recently covered RWKV and will continue to highlight the most interesting research that isn't being done in the large labs.
Replacing fine-tuning with continued pre-training
Even though fine-tuning is now mainstream, we still have a lot to learn. The issue of "catastrophic forgetting" and potential solutions to it have been brought up in many papers: at the fine-tuning stage, the model can forget tasks it previously knew how to solve in favor of new ones. The other issue is apparent memorization of the dataset even after a single epoch, which Jeremy covered in Can LLMs learn from a single example?, but we still don't have an answer to. Despite being the creator of ULMFiT, Jeremy still professes that there are a lot of open questions on fine-tuning:
"So I still don't know how to fine tune language models properly and I haven't found anybody who feels like they do."
He now advocates for "continued pre-training" - maintaining a diversity of data throughout the training process rather than separate pre-training and fine-tuning stages.
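For reference, the three ULMFiT steps that continued pre-training pushes back against look roughly like the minimal, fastai-style sketch below. It is illustrative only: the pretrained AWD_LSTM weights stand in for the general-domain pre-training of step 1, and the IMDB dataset and epoch counts are assumptions, not the paper's exact setup.

```python
from fastai.text.all import *

# Step 1 (general-domain LM): reuse fastai's AWD_LSTM weights pretrained on
# WikiText-103 instead of pre-training from scratch.
path = untar_data(URLs.IMDB)  # illustrative target-domain corpus

# Step 2: fine-tune the language model on the target-domain text.
dls_lm = TextDataLoaders.from_folder(path, is_lm=True, valid_pct=0.1)
lm_learn = language_model_learner(dls_lm, AWD_LSTM, metrics=accuracy)
lm_learn.fine_tune(1)                      # epoch count chosen arbitrarily
lm_learn.save_encoder('domain_encoder')    # keep the fine-tuned encoder

# Step 3: fine-tune a classifier on the labeled task, reusing that encoder.
dls_clas = TextDataLoaders.from_folder(path, valid='test',
                                       text_vocab=dls_lm.vocab)
clas_learn = text_classifier_learner(dls_clas, AWD_LSTM, metrics=accuracy)
clas_learn.load_encoder('domain_encoder')
clas_learn.fine_tune(4)
```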
Mixing instructional data, exercises, code, and other modalities while gradually curating higher-quality data can avoid catastrophic forgetting and lead to more robust capabilities (something we covered in Datasets 101). A toy sketch of this gradual mixing idea follows the timestamps below.
"Even though I originally created the three-step approach that everybody now does, my view is it's actually wrong and we shouldn't use it… the right way to do this is to fine-tune language models, is to actually throw away the idea of fine-tuning. There's no such thing. There's only continued pre-training. And pre-training is something where from the very start, you try to include all the kinds of data that you care about, all the kinds of problems that you care about, instructions, exercises, code, general purpose document completion, whatever. And then as you train, you gradually curate that, you know, you gradually make that higher and higher quality and more and more specific to the kinds of tasks you want it to do. But you never throw away any data… So yeah, that's now my view, is I think ULMFiT is the wrong approach. And that's why we're seeing a lot of these so-called alignment tax… I think it's actually because people are training them wrong."
An example of this phenomenon is CodeLlama, a LLaMA2 model fine-tuned on 500B tokens of code: while the model is much better at code, it's worse on generic tasks that LLaMA2 knew how to solve well before the fine-tuning.
In the episode we also dive into all the places where open source model development and research is happening (academia vs Discords - tracked on our Communities list and on our survey), and how Jeremy recommends getting the most out of these diffuse, pseudonymous communities (similar to the Eleuther AI Mafia).
Show Notes
* Jeremy's Background
* FastMail
* Optimal Decisions
* Kaggle
* Enlitic
* fast.ai
* Rachel Thomas
* Practical Deep Learning
* fastai for PyTorch
* nbdev
* fastec2 (the underrated library we describe)
* Can LLMs learn from a single example?
* the Kaggle LLM Science Exam competition, which "challenges participants to answer difficult science-based questions written by a Large Language Model"
* Sebastian Ruder
* Alec Radford
* Sylvain Gugger
* Stephen Merity
* Chris Lattner
* Modular.ai / Mojo
* Jono Whittaker
* Zeiler and Fergus paper
* ULM Fit
* DAWNBench
* Phi-1
* Code Llama
* AlexNet
Timestamps
* [00:00:00] Intros and Jeremy's background
* [00:05:28] Creating ULM Fit - a breakthrough in NLP using transfer learning
* [00:06:32] The rise of GPT and the appeal of few-shot learning over fine-tuning
* [00:10:00] Starting Fast.ai to distribute AI capabilities beyond elite academics
* [00:14:30] How modern LMs like ChatGPT still follow the ULM Fit 3-step approach
* [00:17:23] Meeting with Chris Lattner on Swift for TensorFlow at Google
* [00:20:00] Continued pre-training as a fine-tuning alternative
* [00:22:16] Fast.ai and looking for impact vs profit maximization
* [00:26:39] Using Fast.ai to create an "army" of AI experts to improve their domains
* [00:29:32] Fast.ai's 3 focus areas - research, software, and courses
* [00:38:42] Fine-tuning memorization and training curve "clunks" before each epoch
* [00:46:47] Poor training and fine-tuning practices may be causing alignment failures
* [00:48:38] Academia vs Discords
* [00:53:41] Jeremy's high hopes for Chris Lattner's Mojo and its potential
* [01:05:00] Adding capabilities like SQL generation through quick fine-tuning
* [01:10:12] Rethinking Fast.ai courses for the AI-assisted coding era
* [01:14:53] Rapid model development has created major technical debt
* [01:17:08] Lightning Round
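Here is the promised toy sketch of the gradual-curation idea quoted above. It is purely illustrative: the source names and proportions are invented, not from the episode; the point is only that sampling weights over data sources shift gradually during continued pre-training instead of being replaced by a separate fine-tuning stage.

```python
import random

# Hypothetical data sources and their start/end sampling proportions.
# No source is ever dropped entirely; the mix just drifts toward curated data.
SOURCES = {
    "web_text":     {"start": 0.70, "end": 0.20},
    "code":         {"start": 0.15, "end": 0.25},
    "instructions": {"start": 0.10, "end": 0.35},
    "exercises":    {"start": 0.05, "end": 0.20},
}

def mixing_weights(progress: float) -> dict:
    """Linearly interpolate source weights as training progresses (0.0 -> 1.0)."""
    raw = {name: cfg["start"] + progress * (cfg["end"] - cfg["start"])
           for name, cfg in SOURCES.items()}
    total = sum(raw.values())
    return {name: w / total for name, w in raw.items()}

def sample_source(progress: float) -> str:
    """Pick which corpus the next training batch is drawn from."""
    weights = mixing_weights(progress)
    names, probs = zip(*weights.items())
    return random.choices(names, weights=probs, k=1)[0]

# Early in training most batches come from general web text; late in training
# curated instructions and exercises dominate, but nothing is thrown away.
print(mixing_weights(0.0))
print(mixing_weights(1.0))
```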
AI Summary (beta)
This is the first episode we're trying this. Here's an overview of the main topics before you dive in the transcript.
* Jeremy's background and philosophies on AI
* Studied philosophy and cognitive science in college
* Focused on ethics and thinking about AI even 30 years ago
* Believes AI should be accessible to more people, not just elite academics/programmers
* Created fast.ai to make deep learning more accessible
* Development of transfer learning and ULMFit
* Idea of transfer learning critical for making deep learning accessible
* ULMFit pioneered transfer learning for NLP
* Proposed training general language models on large corpora then fine-tuning - this became standard practice
* Faced skepticism from the NLP community that this approach would work
* Showed state-of-the-art results on text classification soon after trying it
* Current open questions around fine-tuning LLMs
* Models appear to memorize training data extremely quickly (after 1 epoch)
* This may hurt training dynamics and cause catastrophic forgetting
* Unclear how best to fine-tune models to incorporate new information/capabilities
* Need more research on model training dynamics and ideal data mixing
* Exciting new developments
* Mojo and new programming languages like Swift could enable faster model innovation
* Still lots of room for computer vision-like innovations in transformers
* Small models with fine-tuning may be surprisingly capable for many real-world tasks
* Prompting strategies enable models like GPT-3 to achieve new skills like playing chess at superhuman levels
* LLMs are like computer vision in 2013 - on the cusp of huge new breakthroughs in capabilities
* Access to AI research
* Many key convos happen in private Discord channels and forums
* Becoming part of these communities can provide great learning opportunities
* Being willing to do real work, not just talk about ideas, is key to gaining access
* The future of practical AI
* Coding becoming more accessible to non-programmers through AI assistance
* Pre-requisite programming experience for learning AI may no longer be needed
* Huge open questions remain about how to best train, fine-tune, and prompt LLMs
Transcript
Alessio: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO in Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI. [00:00:21]
Swyx: Hey, and today we have in the remote studio, Jeremy Howard all the way from Australia. Good morning. [00:00:27]
Jeremy: The remote studio, also known as my house. Good morning. Nice to see you. [00:00:32]
Swyx: Nice to see you too. I'm actually very used to seeing you in your mask as a message to people, but today we're mostly audio. But thank you for doing the very important public service of COVID awareness. It was a pleasure. [00:00:46]
Jeremy: It was all very annoying and frustrating and tedious, but somebody had to do it. [00:00:52]
Swyx: Somebody had to do it, especially somebody with your profile. I think it really drives home the message. So we tend to introduce people for them and then ask people to fill in the blanks on the personal side. Something I did not know about you was that you graduated with a BA in philosophy from the University of Melbourne. I assumed you had a PhD. [00:01:14]
Jeremy: No, I mean, I barely got through my BA because I was working 80 to 100 hour weeks at McKinsey and Company from 19 years old onwards. So I actually didn't attend any lectures in second and third year university.
[00:01:35]
Swyx: Well, I guess you didn't need it or you're very sort of self-driven and self-motivated. [00:01:39]
Jeremy: I took two weeks off before each exam period when I was working at McKinsey. And then, I mean, I can't believe I got away with this in hindsight, I would go to all my professors and say, oh, I was meant to be in your class this semester and I didn't quite turn up. Were there any assignments I was meant to have done, whatever. I can't believe all of them let me basically have it. They basically always would say like, okay, well, if you can have this written by tomorrow, I'll accept it. So yeah, stressful way to get through university, but. [00:02:12]
Swyx: Well, it shows that, I guess, you min-maxed the opportunities. That definitely was a precursor. [00:02:18]
Jeremy: I mean, funnily, like in as much as I, you know, in philosophy, the things I found interesting and focused on in the little bit of time I did spend on it was ethics and cognitive science. And it's kind of really amazing that it's now come back around and those are actually genuinely useful things to know about, which I never thought would happen. [00:02:38]
Swyx: A lot of, yeah, a lot of relevant conversations there. So you were a consultant for a while and then in the magical month of June 1999, you founded both Optimal Decisions and FastMail, which I also briefly used. So thank you for that. [00:02:53]
Jeremy: Oh, good for you. Yeah. Cause I had read the statistics, which is that like 90% or something of small businesses fail. So I thought if I start two businesses, I have a higher chance. In hindsight, I was thinking of it as some kind of stochastic thing I didn't have control over, but it's a bit odd, but anyway. [00:03:10]
Swyx: And then you were president and chief scientist at Kaggle, which obviously is the sort of competition platform of machine learning. And then Enlitic, where you were working on using deep learning to improve medical diagnostics and clinical decisions. Yeah. [00:03:28]
Jeremy: I was actually the first company to use deep learning in medicine, so I kind of founded the field. [00:03:33]
Swyx: And even now that's still like a pretty early phase. And I actually heard you on your new podcast with Tanishq, where you went very, very deep into the stuff, the kind of work that he's doing, such a young prodigy at his age. [00:03:47]
Jeremy: Maybe he's too old to be called a prodigy now, ex-prodigy. No, no. [00:03:51]
Swyx: I think he still counts. And anyway, just to round out the bio, you have a lot more other credentials, obviously, but most recently you started Fast.ai, which is still, I guess, your primary identity with Rachel Thomas. So welcome. [00:04:05]
Jeremy: Yep. [00:04:06]
Swyx: Thanks to my wife. Thank you. Yeah. Doing a lot of public service there with getting people involved in AI, and I can't imagine a better way to describe it than fast, fast.ai. You teach people from nothing to stable diffusion in seven weeks or something, and that's amazing. Yeah, yeah. [00:04:22]
Jeremy: I mean, it's funny, you know, when we started that, what was that, like 2016 or something, the idea that deep learning was something that you could make more accessible was generally considered stupid. Everybody knew that deep learning was a thing that you got a math or a computer science PhD, you know, there was one of five labs that could give you the appropriate skills and that you would join, yeah, basically from one of those labs, you might be able to write some papers.
So yeah, the idea that normal people could use that technology to do good work was considered kind of ridiculous when we started it. And we weren't sure if it was possible either, but we kind of felt like we had to give it a go because the alternative was we were pretty sure that deep learning was on its way to becoming, you know, the most or one of the most, you know, important technologies in human history. And if the only people that could use it were a handful of computer science PhDs, that seemed like A, a big waste and B, kind of dangerous. [00:05:28]
Swyx: Yeah. [00:05:29]
Alessio: And, you know, well, I just wanted to know one thing on your bio that at Kaggle, you were also the top rank participant in both 2010 and 2011. So sometimes you see a lot of founders running companies that are not really in touch with the problem, but you were clearly building something that you knew a lot about, which is awesome. Talking about deep learning, you created, published a paper on ULMFiT, which was kind of the predecessor to multitask learning and a lot of the groundwork that then went into Transformers. I've read back on the paper and you turned this model, AWD LSTM, which I did the math and it was like 24 to 33 million parameters, depending on what training data set you use today. That's kind of like not even small, it's like super small. What were some of the kind of like contrarian takes that you had at the time and maybe set the stage a little bit for the rest of the audience on what was kind of like the state of the art, so to speak, at the time and what people were working towards? [00:06:32]
Jeremy: Yeah, the whole thing was a contrarian take, you know. So okay, so we started Fast.ai, my wife and I, and we thought, yeah, so we're trying to think, okay, how do we make it more accessible? So when we started thinking about it, it was probably 2015 and then 2016, we started doing something about it. Why is it inaccessible? Okay, well, A, no one knows how to do it other than a few number of people. And then when we asked those few number of people, well, how do you actually get good results? They would say like, oh, it's like, you know, a box of tricks that aren't published. So you have to join one of the labs and learn the tricks. So a bunch of unpublished tricks, not much software around, but thankfully there was Theano and wrappers, and particularly Lasagne, the wrapper, but yeah, not much software around, not much in the way of data sets, you know, very hard to get started in terms of the compute. Like how do you get that set up? So yeah, no, everything was kind of inaccessible. And you know, as we started looking into it, we had a key insight, which was like, you know what, most of the compute and data for image recognition, for example, we don't need to do it. You know, there's this thing which nobody knows about, nobody talks about called transfer learning, where you take somebody else's model, where they already figured out like how to detect edges and gradients and corners and text and whatever else, and then you can fine tune it to do the thing you want to do. And we thought that's the key. That's the key to becoming more accessible in terms of compute and data requirements. So when we started Fast.ai, we focused from day one on transfer learning.
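The vision transfer-learning recipe Jeremy describes here (reuse a model that already detects edges and textures, then fine-tune it for your task) looks, in today's fastai API, roughly like the minimal sketch below; the pets dataset and the one-epoch schedule are illustrative assumptions, not what was taught in 2016.

```python
from fastai.vision.all import *

# Reuse an ImageNet-pretrained backbone (it already knows edges, gradients,
# corners, textures) and fine-tune only what the new task needs.
path = untar_data(URLs.PETS) / 'images'   # illustrative dataset choice

def is_cat(filename):
    # In this dataset, cat breeds have capitalized filenames.
    return filename[0].isupper()

dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))

learn = vision_learner(dls, resnet34, metrics=error_rate)  # pretrained weights
learn.fine_tune(1)  # one epoch is often enough thanks to transfer learning
```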
Lesson one, in fact, was transfer learning, literally lesson one, something not normally even mentioned in, I mean, there wasn't much in the way of courses, you know, the courses out there were PhD programs that had happened to have recorded their lessons and they would rarely mention it at all. We wanted to show how to do four things that seemed really useful. You know, work with vision, work with tables of data, work with kind of recommendation systems and collaborative filtering and work with text, because we felt like those four kind of modalities covered a lot of the stuff that, you know, are useful in real life. And no one was doing anything much useful with text. Everybody was talking about word2vec, you know, like king plus queen minus woman and blah, blah, blah. It was like cool experiments, but nobody's doing anything like useful with it. NLP was all like lemmatization and stop words and topic models and bigrams and SVMs. And it was really academic and not practical. But I mean, to be honest, I've been thinking about this crazy idea for nearly 30 years since I had done cognitive science at university, where we talked a lot about Searle's Chinese room experiment. This idea of like, what if there was somebody that could kind of like, knew all of the symbolic manipulations required to answer questions in Chinese, but they didn't speak Chinese and they were kind of inside a room with no other way to talk to the outside world other than taking in slips of paper with Chinese written on them and then they do all their rules and then they pass back a piece of paper with Chinese back. And this room with a person in is actually fantastically good at answering any question you give them written in Chinese. You know, do they understand Chinese? And is this, you know, something that's intelligently working with Chinese? Ever since that time, I'd say the most thought, to me, the most thoughtful and compelling philosophical response is yes. You know, intuitively it feels like no, because that's just because we can't imagine such a large kind of system. But you know, if it looks like a duck and acts like a duck, it's a duck, you know, or to all intents and purposes. And so I always kind of thought, you know, so this is basically a kind of analysis of the limits of text. And I kind of felt like, yeah, if something could ingest enough text and could use the patterns it saw to then generate text in response to text, it could appear to be intelligent, you know. And whether that means it is intelligent or not is a different discussion and not one I find very interesting. Yeah. And then when I came across neural nets when I was about 20, you know, what I learned about the universal approximation theorem and stuff, and I started thinking like, oh, I wonder if like a neural net could ever get big enough and take in enough data to be a Chinese room experiment. You know, with that background and this kind of like interest in transfer learning, you know, I'd been thinking about this thing for kind of 30 years and I thought like, oh, I wonder if we're there yet, you know, because we have a lot of text. Like I can literally download Wikipedia, which is a lot of text. And I thought, you know, how would something learn to kind of answer questions or, you know, respond to text? And I thought, well, what if we used a language model? So language models are already a thing, you know, they were not a popular or well-known thing, but they were a thing.
But a language model is just this idea that you could train a model to fill in the gaps. Or actually, in those days it wasn't fill in the gaps, it was finish a string. And in fact, Andrej Karpathy did his fantastic RNN demonstration of this at a similar time, where he showed you can have it ingest Shakespeare and it will generate something that looks a bit like Shakespeare. I thought, okay, so if I do this at a much bigger scale, using all of Wikipedia, what would it need to be able to do to finish a sentence in Wikipedia effectively, to do it quite accurately quite often? I thought, geez, it would actually have to know a lot about the world. You know, it'd have to know that there is a world, and that there are objects, and that objects relate to each other through time and cause each other to react in certain ways, and that causes precede effects, and that there are animals and there are people, and that people can be in certain positions during certain timeframes. And then, putting all that together, you can finish a sentence like "this was signed into law in 2016 by US President X" and it would fill in the gap, you know. So that's why I tried to create what in those days was considered a big language model, trained on the entirety of Wikipedia, which was, you know, a bit unheard of. And my interest was not in, you know, just having a language model. My interest was in, like, what latent capabilities would such a system have that would allow it to finish those kinds of sentences? Because I was pretty sure, based on our work with transfer learning in vision, that I could then suck out those latent capabilities by transfer learning, you know, by fine-tuning it on a task data set or whatever. So we created this three-step system. Step one was train a language model on a big corpus. Step two was fine-tune the language model on a more curated corpus. And step three was further fine-tune that model on a task. And of course, that's what everybody still does today, right? That's what ChatGPT is. And so the first time I tried it, within hours I had a new state-of-the-art academic result on IMDB. And I was like, holy s**t, it does work. And so you asked, to what degree was this pushing against the established wisdom? You know, every way. Like, the reason it took me so long to try it was because I asked all my friends in NLP if this could work. And everybody said, no, it definitely won't work. It wasn't like, oh, maybe. Everybody was like, it definitely won't work. NLP is much more complicated than vision. Language is a vastly more complicated domain. You know, and you've got problems like the grounding problem. We know from philosophy and theory of mind that it's actually impossible for it to work. So yeah, so don't waste your time. [00:15:10]Alessio: Jeremy, had people not tried because it was too complicated to actually get the data and set up the training? Or were people just lazy and kind of like, hey, this is just not going to work? [00:15:20]Jeremy: No, nobody was lazy. So, like, there were two people I thought at that time, actually, who were the strongest at language models: Stephen Merity and Alec Radford. And at the time I didn't know Alec, but after I'd released ULMFiT and he had released GPT, I organized a chat for both of us with Cade Metz at the New York Times. And Cade asked, sorry, and Alec answered this question for Cade.
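A schematic sketch of that three-step recipe, with hypothetical helper functions (train_lm, fine_tune, add_classifier_head) standing in for whichever training framework you use; it is meant to show the shape of the pipeline, not a real API.

```python
# Schematic sketch of the three-step ULMFiT-style recipe described above.
# train_lm / fine_tune / add_classifier_head are hypothetical helpers, not a real API.

def ulmfit_style_pipeline(wikipedia_corpus, domain_corpus, labeled_task_data):
    # Step 1: train a language model on a big general corpus (e.g. all of Wikipedia),
    # purely on next-token prediction, so it builds up latent knowledge of the world.
    lm = train_lm(corpus=wikipedia_corpus, objective="next_token")

    # Step 2: continue training the same language model on a smaller, curated corpus
    # from the target domain (e.g. movie reviews), still predicting the next token.
    lm = fine_tune(lm, corpus=domain_corpus, objective="next_token")

    # Step 3: swap in a task head and fine-tune on the labeled task
    # (e.g. sentiment classification), extracting the latent capabilities.
    clf = add_classifier_head(lm, num_classes=2)
    clf = fine_tune(clf, dataset=labeled_task_data, objective="classification")
    return clf
```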
And Cade was like, so how did, you know, GPT come about? And Alec said, well, I was pretty sure that pre-training on a general large corpus wouldn't work, so I hadn't tried it. And then I read ULMFiT, and it turns out it did work. And so I did it, you know, bigger, and it worked even better. And similar with Stephen: you know, I asked Stephen Merity, like, why don't we just take your AWD-LSTM and train it on all of Wikipedia and fine-tune it? And he was kind of like, well, I don't think that's really going to fly. Like, two years before, I did a very popular talk at KDD, the conference where everybody in NLP was in the audience. I recognized half the faces, you know, and I told them all this: I'm sure transfer learning is the key. I'm sure ImageNet, you know, is going to be an NLP thing as well. And, you know, everybody was interested and people asked me questions afterwards, but nobody followed up, because everybody knew that it didn't work. I mean, we were even scooped a little bit by Dai and Le, Quoc Le at Google. They had, and I didn't even realize this, which is a bit embarrassing, already done a large language model and fine-tuned it. But again, they didn't create a general-purpose large language model on a general-purpose corpus. They only ever tested a domain-specific corpus. And I haven't spoken to Quoc actually about that, but I assume the reason was the same. It probably just didn't occur to them that the general approach could work. So maybe it was that kind of 30 years of mulling over the Searle Chinese room experiment that had convinced me that it probably would work. I don't know. Yeah. [00:17:48]Alessio: Interesting. I just dug up Alec's announcement tweet from 2018. He said, inspired by CoVe, ELMo, and ULMFiT, we show that a single transformer language model can be fine-tuned to a wide variety of tasks. It's interesting because, you know, today people think of OpenAI as the leader, kind of like the research lab pushing forward the field. What was it like at the time? You know, going back five years, people think of it as an overnight success, but obviously it took a while. [00:18:16]Swyx: Yeah. Yeah. [00:18:17]Jeremy: No, I mean, absolutely. And I'll say, you know, it's interesting that it mentioned ELMo, because in some ways that was kind of diametrically opposed to ULMFiT. There was a lot of activity at the same time as ULMFiT's release. So before it, Bryan McCann, I think at Salesforce, had come out with this neat model that did a kind of multitask learning, but again, they didn't create a general fine-tuned language model first. There was ELMo, which I think was released, you know, actually quite a few months after the first ULMFiT example, I think. But yeah, there was a bit of this stuff going on. And the problem was, particularly after GPT came out, everybody wanted to focus on zero-shot and few-shot learning. You know, everybody hated fine-tuning. Everybody hated transfer learning. And, like, I literally did tours trying to get people to start doing transfer learning, and nobody was interested, particularly after GPT showed such good results with zero-shot and few-shot learning.
And so I actually feel like we kind of went backwards for years. And, to be honest, I'm a bit sad about this now, but I kind of got so disappointed and dissuaded because it felt like these much bigger labs, you know, like fast.ai had only ever been just me and Rachel, were getting all of this attention for an approach I thought was the wrong way to do it. You know, I was convinced it was the wrong way to do it. And so, yeah, for years people were really focused on getting better at zero-shot and few-shot, and it wasn't until, you know, this key idea of, like, well, let's take the ULMFiT approach, but for step two, rather than fine-tuning on a kind of domain corpus, let's fine-tune on an instruction corpus. And then in step three, rather than fine-tuning on a reasonably specific task like classification, let's fine-tune with RLHF. And so that was really key, you know. So I was kind of out of the NLP field for a few years there, because yeah, it just felt like, I don't know, pushing uphill against this vast tide, which I was convinced was not the right direction, but who's going to listen to me, you know, because, as you said, I don't have a PhD, I'm not at a university, or at least I wasn't then. I don't have a big set of computers to fine-tune huge transformer models on. So yeah, it was definitely difficult. It's always been hard. You know, it's always been hard. Like, I've always been somebody who does not want to build stuff on lots of big computers, because most people don't have lots of big computers, and I hate creating stuff that most people can't use, you know. And also, stuff that's created on lots of big computers has always been much more media-friendly. So it might seem like a recent thing, but actually throughout my 30 years in data science, the attention's always been on, you know, the big iron results. So when I first started, everybody was talking about data warehouses and it was all about Teradata, and it'd be like, oh, this big bank has this huge room full of computers and they have terabytes of data available, you know, at the press of a button. And yeah, that's always what people want to talk about, what people want to write about. And then of course, students coming out of their PhDs and stuff, that's where they want to go work, because that's what they've read about. And to me, it's a huge distraction, you know, because, like I say, most people don't have unlimited compute and I want to help most people, not the small subset of the most well-off people. [00:22:16]Alessio: That's awesome. And it's great to hear; you do such a great job educating that a lot of times you're not telling your own story, you know? So I love this conversation. And the other thing, before we jump into Fast.AI, actually: a lot of people that I know, when they run across a new architecture and whatnot, they're like, I've got to start a company and raise a bunch of money and do all of this stuff. And instead you were like, I want everybody to have access to this. Why was that the case for you? Was it because you already had a successful venture in FastMail and you were more interested in that? What was the reasoning? [00:22:52]Jeremy: It's a really good question. So I guess the answer is yes, that's the reason why. So when I was a teenager, I thought it would be really cool to have my own company. You know, I didn't know the word startup. I didn't know the word entrepreneur. I didn't know the word VC.
And I didn't really know what any of those things were, really, until after we started Kaggle, to be honest. Even the ones I started, what we now call startups, I just thought of as small businesses. You know, they were just companies. So yeah, those two companies were FastMail and Optimal Decisions. FastMail was the first kind of synchronized email provider for non-businesses, so something where you can get your same email at home, on your laptop, at work, on your phone, whatever. And then Optimal Decisions invented a new approach to insurance pricing, something called profit-optimized insurance pricing. So I sold both of those companies, you know, after 10 years. And at that point, I had achieved the thing that as a teenager I had wanted to do. You know, it took a lot longer than it should have, because I spent way longer in management consulting than I should have, because I got caught up in that stupid rat race. But, you know, eventually I got there, and I remember my mom saying to me, you must be so proud. You know, because she remembered my dream. She's like, you've done it. And I kind of reflected and I was like, I'm not proud at all. You know, like, people quite liked FastMail. You know, it's quite nice to have synchronized email. It probably would have happened anyway. And I'm certainly not proud that I've helped some insurance companies suck more money out of their customers. Yeah, no, I'm not proud. You know, I haven't really helped the world very much. Maybe in the insurance case I've made it a little bit worse, I don't know. So, yeah, I was determined not to waste more years of my life working hard to do things which I could not be reasonably sure would have a lot of value. So, you know, I took some time off. I wasn't sure if I'd ever work again, actually. I didn't particularly want to, because it felt like, yeah, it felt like such a disappointment. But, you know, I didn't need to. I had enough money. Like, I wasn't super rich, but I had enough money. I didn't need to work. And I certainly recognized that amongst the other people I knew who had enough money that they didn't need to work, they all worked ridiculously hard, you know, and constantly put themselves in extremely stressful situations. And I thought, I don't want to be one of those idiots who's tied to, you know, buying a bigger plane than the next guy or whatever. You know, Kaggle came along and I mainly did that just because it was fun and interesting to hang out with interesting people. But, you know, with Fast.ai in particular, Rachel and I had a very explicit, you know, long series of conversations over a long period of time about, like, well, how can we be the most helpful to society as a whole, and particularly to those people who maybe need more help, you know? And so we definitely saw the world going in a potentially pretty dystopian direction if the world's most powerful technology was controlled by a small group of elites. So we thought, yeah, we should focus on trying to help that not happen. You know, sadly, it looks like it still is likely to happen. But I mean, I feel like we've helped make it a little bit less likely. So we've done our bit. [00:26:39]Swyx: You've shown that it's possible. And I think your constant advocacy, your courses, your research that you publish, you know, just the other day you published a finding on, you know, learning that I think is still something that people are talking about quite a lot.
I think that that is the origin story of a lot of people who are going to be, you know, little Jeremy Howards, furthering your mission. You don't have to do everything by yourself is what I'm saying. No, definitely, definitely. [00:27:10]Jeremy: You know, that was a big takeaway from Enlitic. At Enlitic it definitely felt like we had to do everything ourselves. And I kind of wanted to solve medicine. And I'll say, yeah, okay, solving medicine is actually quite difficult, and I can't do it on my own. And there are a lot of other things I'd like to solve, and I can't do those either. So that was definitely the other piece: like, yeah, you know, can we create an army of passionate domain experts who can change their little part of the world? And that's definitely happened. Like, I find nowadays, at least half the time, probably quite a bit more, when I get in contact with somebody who's done really interesting work in some domain, they say, yeah, I got my start with fast.ai. So it's definitely, I can see that. And I also know from talking to folks at places like Amazon and Adobe and stuff that, you know, there are lots of alumni there. And they say, oh my God, I got here and, like, half of the people are fast.ai alumni. So it's fantastic. [00:28:13]Swyx: Yeah. [00:28:14]Jeremy: Actually, Andrej Karpathy grabbed me when I saw him at NeurIPS a few years ago. And he was like, I have to tell you, thanks for the fast.ai courses. When people come to Tesla and they need to know more about deep learning, we always send them to your course. And the OpenAI Scholars Program was doing the same thing. So it's kind of like, yeah, it's had a surprising impact, you know, and that's just one of, like, three things we do, the course, you know. [00:28:40]Swyx: Yes. [00:28:40]Jeremy: And it's only ever been at most two people, either me and Rachel or me and Sylvain; nowadays it's just me. So yeah, I think it shows you don't necessarily need a huge amount of money and a huge team of people to make an impact. [00:28:56]Swyx: Yeah. So just to reintroduce fast.ai for people who may not have dived into it much: there are the courses that you do. There is the library, which is very well loved, and I kind of think of it as a nicer layer on top of PyTorch that people should start with by default, and you use it as the basis for a lot of your courses. And then you have NBDev, which, I don't know, is that the third one? [00:29:27]Jeremy: Oh, so the three areas were research, software, and courses. [00:29:32]Swyx: Oh, sorry. [00:29:32]Jeremy: So then in software, you know, fast.ai is the main thing, but NBDev is not far behind. But then there are also things like FastCore, GHAPI, I mean, dozens of open source projects that I've created, and some of them have been pretty popular and some of them are still a little bit hidden, actually. Some of them I should try to do a better job of telling people about. [00:30:01]Swyx: What are you thinking of? [00:30:04]Jeremy: Oh, I don't know, just little things. Like, for example, for working with EC2 and AWS, I created a FastEC2 library, which I think is way more convenient and nice to use than anything else out there. And it's literally got a whole dynamic autocomplete that works both on the command line and in notebooks and will auto-complete your instance names and everything like that. You know, just little things like that.
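For reference, FastEC2 sits on top of the same EC2 API you can reach with plain boto3. A rough sketch of listing instances and their Name tags directly (this is the underlying AWS call, not FastEC2's own interface, and it assumes AWS credentials are already configured):

```python
# Rough sketch of the raw AWS call that a thin EC2 helper sits on top of:
# list instances, their Name tags, and their state with plain boto3.
import boto3

def list_instances(region: str = "us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    resp = ec2.describe_instances()
    for reservation in resp["Reservations"]:
        for inst in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
            yield inst["InstanceId"], tags.get("Name", ""), inst["State"]["Name"]

for instance_id, name, state in list_instances():
    print(f"{instance_id}  {name:20s}  {state}")
```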
When I work with some domain, I try to make it as enjoyable as possible for me to work in it. So, like with GHAPI, for example: I think the GitHub API is incredibly powerful, but I didn't find it good to work with, because I didn't particularly like the libraries that were out there. So GHAPI, like FastEC2, autocompletes, both at the command line and in a notebook or whatever, literally the entire GitHub API. The entire thing is, I think, less than 100K of code, because, as far as I know, it's the only one that grabs it directly from the official OpenAPI spec that GitHub produces. And if you're in GHAPI and you just type an API method, autocomplete it and hit enter, it prints out brief docs and then gives you a link to the actual documentation page. You know, GitHub Actions I can write now in Python, which is just so much easier than writing them in TypeScript and stuff. So, you know, just little things like that. [00:31:40]Swyx: I wish that was an approach more developers took, publishing some of their work along the way. You described the third arm of FastAI as research. It's not something I see often. Obviously, you do do some research. So how do you run your research? What are your research interests? [00:31:59]Jeremy: Yeah, so research is what I spend the vast majority of my time on. And the artifacts that come out of that are largely software and courses. You know, so to me, the main artifact shouldn't be papers, because papers are things read by a small, exclusive group of people. You know, to me, the main artifacts should be, like, something teaching people, here's how to use this insight, and here's software you can use that builds it in. So I think I've only ever done three first-author papers in my life, you know, and none of them are ones I wanted to do. You know, they were all ones that... so one was ULMFiT, where Sebastian Ruder reached out to me after seeing the course and said, like, you have to publish this as a paper, you know. And he said, I'll write it. He said, I want to write it, because if I do, I can put it in my PhD and that would be great. And it's like, okay, well, I want to help you with your PhD, and that sounds great. Then one was the masks paper, which just had to exist and nobody else was writing it. And then the third was the fast.ai library paper, which, again, somebody reached out and said, please, please write this; we will waive the fee for the journal and everything and actually help you get it through publishing and stuff. So yeah, other than that, I've never written a first-author paper. So the research is like... well, so for example, you know, DAWNBench was a competition which Stanford ran a few years ago. It was kind of the first big competition of, like, who can train neural nets the fastest rather than the most accurate. And specifically, it was who can train ImageNet the fastest. And again, this was one of these things where it was created by necessity. So Google had just released their TPUs. And so I heard from my friends at Google that they had put together this big team to smash DAWNBench so that they could prove to people that they had to use Google Cloud and use their TPUs, and show how good their TPUs were. And we kind of thought, oh s**t, this would be a disaster if they do that, because then everybody's going to be like, oh, deep learning is not accessible.
[00:34:20]Swyx: You know, to actually be good at it, [00:34:21]Jeremy: you have to be Google and you have to use special silicon. And so, you know, we only found out about this 10 days before the competition finished. But, you know, we basically got together an emergency bunch of our students and Rachel and I, and we sat down for the next 10 days and just tried to crunch through and use all of our best ideas that had come from our research. And so particularly progressive resizing: just basically train mainly on small things, train on non-square things, you know, stuff like that. And so, yeah, we ended up winning, thank God. And so, you know, we turned it around from being like, oh s**t, this is going to show that you have to be Google and have TPUs, to being like, oh my God, even the little guy can do deep learning. So that's an example of the kind of research artifacts we do. And yeah, all of my research is always: how do we do more with less, you know? So how do we get better results with less data, with less compute, with less complexity, with less education, you know, stuff like that. So ULMFiT is obviously a good example of that. [00:35:37]Swyx: And most recently you published "Can LLMs learn from a single example?" Maybe could you tell the story a little bit behind that? And maybe that goes a little bit into the very-low-resource learning literature. [00:35:52]Jeremy: Yeah, yeah. So me and my friend Jono Whitaker had basically been playing around with this fun Kaggle competition, which is actually still running as we speak, which is: can you create a model which can answer multiple-choice questions about anything that's in Wikipedia? And the thing that makes it interesting is that your model has to run on Kaggle within nine hours. And Kaggle's very, very limited. So you've only got 14 gig of RAM, only two CPUs, and a small, very old GPU. So this is cool, you know; if you can do well at this, then this is a good example of, like, oh, you can do more with less. So yeah, Jono and I were playing around with fine-tuning, of course, transfer learning, pre-trained language models. And we saw this... so we always, you know, plot our losses as we go. So here's another thing we created, actually Sylvain Gugger, when he worked with us, created, called fastprogress, which is kind of like tqdm, but we think a lot better. So we look at our fastprogress curves, and they kind of go down, down, down, down a little bit, little bit, little bit, and then suddenly go clunk, and they drop. And then down, down, down, down a little bit, and then suddenly clunk, they drop. We're like, what the hell? These clunks are occurring at the end of each epoch. So normally in deep learning, this is, you know, something I've seen before, and it's always been a bug. It's always turned out that, like, oh, we accidentally forgot to turn on eval mode during the validation set, so it was actually learning then. Or, oh, we accidentally were calculating moving-average statistics throughout the epoch, so it's a recency-weighted moving average or whatever. And so we were using the Hugging Face Trainer. So, you know, I did not give my friends at Hugging Face the benefit of the doubt. I thought, oh, they've f**ked up the Hugging Face Trainer, you know, idiots. Well, we'll use the fast.ai Learner instead. So we switched over to Learner.
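A minimal sketch of the progressive-resizing trick mentioned in the DAWNBench story above: train first at a small image size, then continue at larger sizes. The sizes, epoch counts, and the FakeData stand-in dataset are illustrative assumptions, not the actual DAWNBench setup.

```python
# Progressive-resizing sketch: spend most epochs on small, cheap images, then
# finish on larger ones. FakeData stands in for a real dataset; swap in your own
# ImageFolder / DataLoader in practice.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

model = models.resnet50(weights=None)        # trained from scratch, as in a speed run
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

def make_loader(image_size: int) -> DataLoader:
    # Stand-in data: 64 random "images" at the requested resolution.
    ds = datasets.FakeData(size=64, image_size=(3, image_size, image_size),
                           num_classes=10, transform=transforms.ToTensor())
    return DataLoader(ds, batch_size=16, shuffle=True)

def train_epochs(loader: DataLoader, n_epochs: int) -> None:
    model.train()
    for _ in range(n_epochs):
        for xb, yb in loader:
            opt.zero_grad()
            loss_fn(model(xb), yb).backward()
            opt.step()

for size, n_epochs in [(128, 2), (192, 1), (224, 1)]:   # small first, larger later
    train_epochs(make_loader(size), n_epochs)
```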
We still saw the clunks, and, you know, that shouldn't really happen, because semantically speaking, an epoch boundary isn't a thing, you know; nothing happens. Well, nothing's meant to happen when you go from ending one epoch to starting the next one. So there shouldn't be a clunk, you know. So I kind of asked around on the open-source Discords, like, what's going on here? And everybody was just like, oh, that's just what these training curves look like. They all look like that. Don't worry about it. And I was like, oh, are you all using Trainer? Yes. Oh, well, there must be some bug with Trainer. And I was like, well, we also saw it in Learner, [00:38:42]Swyx: and somebody else is like, [00:38:42]Jeremy: no, we've got our own Trainer. We get it as well. They're just like, don't worry about it. It's just something we see. It's just normal. [00:38:48]Swyx: I can't do that. [00:38:49]Jeremy: I can't just be like, here's something that in the previous 30 years of neural networks nobody ever saw, and now suddenly we see it, [00:38:57]Swyx: so don't worry about it. [00:38:59]Jeremy: I just, I have to know why. [00:39:01]Swyx: Can I clarify? Was everyone that you were talking to seeing it on the same dataset, or on different datasets? [00:39:08]Jeremy: Different datasets, different Trainers. They're just like, no, this is just what it looks like when you fine-tune language models. Don't worry about it. You know, I hadn't seen it before, but, as I say, I kept working on language models for a couple of years after ULMFiT, and then I kind of moved on to other things, partly out of frustration. So I hadn't been fine-tuning, you know... I mean, Llama's only been out for a few months, right? But I wasn't one of those people who jumped straight into it, you know? So I was relatively new to the kind of Llama fine-tuning world, whereas these guys had been, you know, doing it since day one. [00:39:49]Swyx: It was only a few months ago, [00:39:51]Jeremy: but it's still quite a bit of time. So, yeah, they're just like, no, this is what we all see. [00:39:56]Swyx: Don't worry about it. [00:39:56]Jeremy: So yeah, I've just got this brain where I have to know why things are the way they are. And so I asked people, like, well, why do you think it's happening? And they'd be like, oh, pretty obviously it's because it's memorized the data set. And I was like, that can't be right, it's only seen each example once. Like, look at this, the loss has dropped by 0.3, which is like, basically it knows the answer. And they're like, no, no, it's just that it's memorized the data set. So yeah. So look, Jono and I did not discover this, and Jono and I did not come up with the hypothesis. You know, I guess we were just the ones who had been around for long enough to recognize that, like, this isn't how it's meant to work. And so, you know, we went back and said, okay, let's just run some experiments, because nobody seems to have actually published anything about this. [00:40:51]Jeremy: Well, not quite true. Some people had published things, but nobody had actually stepped back and said, like, what the hell, you know, how can this be possible? Is it possible? Is this what's happening? And so, yeah, we created a bunch of experiments where we basically predicted ahead of time.
It's like, okay, if this hypothesis is correct, that it's memorized the training set, then we ought to see blah under these conditions, but not under those conditions. And so we ran a bunch of experiments, and all of them supported the hypothesis that it was memorizing the data set after seeing it just once. And it's a pretty big data set, you know. Which, in hindsight, is not totally surprising, because remember, the ULMFiT theory was like, well, it's kind of creating all these latent capabilities to make it easier for it to predict the next token. So if it's got all this latent capability, it ought to also be really good at compressing new tokens, because it can immediately recognize them as, like, oh, that's just a version of this. So it's not so crazy, you know, but it does require us to rethink everything. Because, like, nobody knows, okay, so how do we fine-tune these things? Because, like, maybe it doesn't even matter. Like, maybe it's fine. Maybe it's fine that it's memorized the data set after one go, and you do a second go, and okay, the validation loss is terrible because it's now really overconfident. [00:42:20]Swyx: That's fine. [00:42:22]Jeremy: Don't, you know... I keep telling people, don't track validation loss, track validation accuracy, because at least that will still be useful. Just another thing that's got lost since ULMFiT: nobody tracks accuracy of language models anymore. But you know, it'll still keep learning, and it does, it does keep improving. But is it worse? You know, like, now that it's kind of memorized it, it's probably getting a less strong signal, you know, I don't know. So I still don't know how to fine-tune language models properly, and I haven't found anybody who feels like they do; like, nobody really knows whether this memorization thing is... it's probably a feature in some ways. There are probably some things you can do usefully with it. It's probably, yeah, I have a feeling it's messing up training dynamics as well. [00:43:13]Swyx: And does it come at the cost of catastrophic forgetting as well, right? Which is the other side of the coin. [00:43:18]Jeremy: It does to some extent. Like, we know it does; look at Code Llama, for example. So Code Llama was, I think, like a 500-billion-token fine-tuning of Llama 2 using code, and also prose about code, that Meta did. And honestly, they kind of blew it, because Code Llama is good at coding, but it's bad at everything else, you know, and it used to be good. Yeah, I was pretty sure this was coming; like, before they released it, me and lots of people in the open-source Discords were like, oh my God, you know, we know this is coming, Yann LeCun is saying it's coming, I hope they keep at least like 50% non-code data, because otherwise it's going to forget everything else. And they didn't; only like 0.3% of their epochs were non-code data. So it did, it forgot everything else. So now it's good at code and it's bad at everything else. So we definitely have catastrophic forgetting. It's fixable; somebody just has to spend their time training a model on a good mix of data. Like, so, okay, so here's the thing. Even though I originally created the three-step approach that everybody now does, my view is it's actually wrong and we shouldn't use it. [00:44:36]Jeremy: And that's because people are using it in a way different to why I created it. You know, I created it thinking the task-specific models would be more specific.
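A sketch of the prediction-driven memorization check described a little earlier in this exchange: compare loss on examples the model has seen exactly once against loss on held-out examples. The model, the two loaders, and the per-batch loss helper are hypothetical placeholders for whatever setup you use.

```python
# Sketch of a memorization check: if the model memorizes after a single exposure,
# its loss on examples seen exactly once should collapse, while loss on held-out
# examples stays roughly flat. `model`, the loaders, and `lm_loss` are hypothetical.
import statistics

def memorization_gap(model, seen_once_loader, heldout_loader, lm_loss):
    """Return (mean loss on seen-once data, mean loss on held-out data)."""
    seen = statistics.mean(lm_loss(model, batch) for batch in seen_once_loader)
    held = statistics.mean(lm_loss(model, batch) for batch in heldout_loader)
    return seen, held

# Usage (with your own model / data / per-batch loss function):
# seen, held = memorization_gap(model, seen_once_loader, heldout_loader, lm_loss)
# A large gap after a single epoch supports the memorization hypothesis.
```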
You know, it's like, oh, this is a sentiment classifier, as an example of a task, you know. But the tasks now are, you know, RLHF, which is basically: answer questions in a way that makes people feel happy about your answer. So that's a much more general task, and it's a really cool approach. And so we see, for example, RLHF also breaks models. Like, you know, with GPT-4 after RLHF, we know from the work that Microsoft did that the earlier, less aligned version was better. And these are all kind of examples of catastrophic forgetting. And so to me, the right way to fine-tune language models is to actually throw away the idea of fine-tuning. There's no such thing. There's only continued pre-training. And pre-training is something where, from the very start, you try to include all the kinds of data that you care about, all the kinds of problems that you care about: instructions, exercises, code, general-purpose document completion, whatever. And then as you train, you gradually curate that; you know, you gradually make it higher and higher quality and more and more specific to the kinds of tasks you want it to do. But you never throw away any data. You always keep all of the data types there in reasonably high quantities. You know, maybe with a quality filter: you stop training on low-quality data, because it's probably fine to forget how to write badly, maybe. So yeah, that's now my view: I think ULMFiT is the wrong approach. And that's why we're seeing a lot of these, you know, so-called alignment taxes and this view of, like, oh, a model can't both code and do other things. And, you know, I think it's actually because people are training them wrong. [00:46:47]Swyx: Yeah, well, I think you have a clear [00:46:51]Alessio: anti-laziness approach. I think other people are not as good-hearted, you know. They're like, [00:46:57]Swyx: hey, they told me this thing works. [00:46:59]Alessio: And if I release a model this way, people will appreciate it, I'll get promoted and I'll kind of make more money. [00:47:06]Jeremy: Yeah, and it's not just money. It's, like, this is how citations work, sadly, you know: if you want to get cited, you need to write a paper that people in your field recognize as an advancement on things that we know are good. And so we've seen this happen again and again. So, like I say, with zero-shot and few-shot learning, everybody was writing about that. Or, you know, with image generation, everybody was just writing about GANs, and I was trying to say, like, no, GANs are not the right approach. You know, and I showed, again through research that we demonstrated in our videos, that you can do better than GANs, much faster and with much less data. And nobody cared, because, again, if you want to get published, you write a GAN paper that slightly improves this one part of GANs in this tiny field, and you'll get published, you know. So it's, yeah, it's not set up for real innovation. Again, it's really helpful for me that, you know, I have my own research lab with nobody telling me what to do, and I don't even publish, so it doesn't matter if I get citations. And so I just work on what I think actually matters. And, you know, actually, places like OpenAI, the researchers there can do that as well. It's a shame, you know; I wish there were more academic, open venues in which people can focus on genuine innovation.
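As a tiny sketch of the "never throw away any data type" idea above: sample each training example from a weighted mix of sources rather than switching wholesale to one domain. The sources and weights below are made-up illustrations, not a recommendation.

```python
# Sketch of mixed-source sampling for continued pre-training: every data type
# stays in the mix; only the proportions shift as training progresses.
import random

sources = {                     # toy stand-ins for real token streams
    "web_text":     ["web doc 1", "web doc 2"],
    "code":         ["def f(): ...", "fn main() {}"],
    "instructions": ["Q: ...\nA: ...", "Summarize: ..."],
    "exercises":    ["Exercise: prove that ..."],
}

# A later-phase mix upweights curated data, but no source ever drops to ~0%
# (the forgetting failure mode described above for Code Llama's data mix).
early_weights = {"web_text": 0.6, "code": 0.2, "instructions": 0.1, "exercises": 0.1}
late_weights  = {"web_text": 0.3, "code": 0.3, "instructions": 0.25, "exercises": 0.15}

def next_example(weights):
    name = random.choices(list(weights), weights=list(weights.values()))[0]
    return name, random.choice(sources[name])

for _ in range(5):
    print(next_example(late_weights))
```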
[00:48:38]Swyx: Twitter, which unironically has become a little bit of that forum. I wanted to follow up on one thing that you mentioned, which is that you asked around on the open-source Discords. I don't know if it's too much to ask, like, what Discords are lively or useful right now. Something I definitely felt like I missed out on was the early days of EleutherAI. And, you know, like, what is the new Eleuther? And you actually shouted out the Alignment Lab AI Discord in your blog post. And that was the first time I even knew about it; like, I saw them on Twitter, never knew they had a Discord, never knew that there was actually substantive discussion going on in there and that you were an active member of it. [00:49:23]Jeremy: Okay, yeah. And then even then, if you do know about that and you go there, it'll look like it's totally dead. And that's because, unfortunately, in nearly all the Discords, nearly all of the conversation happens in private channels. [00:49:35]Swyx: How does someone get into that world? Because it's obviously very, very instructive, right? [00:49:42]Jeremy: You could just come to the fast.ai Discord, which, I'll be honest with you, is less bustling than some of the others, but it's not terrible. And, to be fair, one of its most bustling channels is private. [00:49:57]Swyx: I guess. [00:49:59]Jeremy: So I'm just thinking. [00:50:01]Swyx: It's just the nature of quality discussion, right? [00:50:05]Jeremy: Yeah, I guess when I think about it, I didn't have any private discussions on our Discord for years, but there were a lot of people who came in with, like, oh, I just had this amazing idea for AGI: if you just imagine that AI is a brain, then we... you know, I don't want to talk about it. You know, you don't want to be dismissive or whatever. And it's like, oh, well, that's an interesting comment, but maybe you should try training some models first to see if that aligns with your intuition. Like, oh, but how could I possibly learn? It's like, well, we have a course, just actually spend time learning. Like, you know, anyway. And so, like, okay, I know the people who always have good answers there. And so I created a private channel and put them all in it. And I've got to admit, that's where I post more often, because there's much less, you know, flight-of-fancy views about how we could solve AGI, blah, blah, blah. So there is a bit of that. But having said that, I think the bar is pretty low. Like, if you join a Discord and you hit the participants or community or whatever button, you can see who's in it. And then you'll see at the top who the admins or moderators or people in the dev role are. And just DM one of them and say, like, oh, here's my GitHub, here are some blog posts I wrote, you know, I'm interested in talking about this, can I join the private channels? And I've never heard of anybody saying no. I will say, you know, Eleuther's all pretty open. So you can do the Eleuther Discord still. You know, one problem with the Eleuther Discord is it's been going on for so long that it's very inside baseball. It's quite hard to get started. Yeah. CarperAI, I think, is all open. That's part of Stability. That's more accessible. [00:52:03]Swyx: Yeah.
[00:52:04]Jeremy: There's also, just recently, Nous Research, which does the Hermes models and datasets; it just opened up. They've got some private channels, but it's pretty open, I think. You mentioned Alignment Lab; on that one, all the interesting stuff is on private channels. So just ask. If you know me, ask me, because I've got admin on that one. There's also, yeah, OS Skunkworks, OS Skunkworks AI is a good Discord, which I think is open. So yeah, they're all pretty good. [00:52:40]Swyx: I don't want you to leak any, you know, Discords that don't want any publicity, but this is all helpful. [00:52:46]Jeremy: We all want people, like, we all want people. [00:52:49]Swyx: We just want people who, like, [00:52:51]Jeremy: want to build stuff, rather than people who... and like, it's fine to not know anything as well, but if you don't know anything but you want to tell everybody else what to do and how to do it, that's annoying. If you don't know anything and want to be told, like, here's a really small kind of task that, as somebody who doesn't know anything, is going to take you a really long time to do, but it would still be helpful, and then you go and do it, that would be great. The truth is, yeah, [00:53:19]Swyx: like, I don't know, [00:53:20]Jeremy: maybe 5% of people come in with great enthusiasm, saying that they want to learn and they'll do anything. [00:53:25]Swyx: And then somebody says like, [00:53:25]Jeremy: okay, here's some work you can do. Almost nobody does that work. So if you're somebody who actually does the work and follows up, you will massively stand out. That's an extreme rarity. And everybody will then want to help you do more work. [00:53:41]Swyx: So yeah. [00:53:41]Jeremy: So just, yeah, just do the work and people will want to support you. [00:53:47]Alessio: Our Discord used to be referral-only for a long time. We didn't have a public invite, and then we opened it and now do kind of like channel gating. Yeah, a lot of people just want to do, I remember it used to be like, you know, being a forum moderator, [00:54:00]Swyx: it's like people just want to do [00:54:01]Alessio: like drive-by posting, [00:54:03]Swyx: you know, and like, [00:54:03]Alessio: they don't want to help the community. They just want to get their question answered. [00:54:07]Jeremy: I mean, the funny thing is, our forum community does not have any of that garbage. You know, there's something specific about the low-latency thing where people expect an instant answer, whereas somehow, in a forum thread, they know it's there forever, so people are a bit more thoughtful. But then the forums are less active than they used to be, because Discord has got more popular, you know? So it's all a bit of a compromise. You know, running a healthy community is, yeah, always a bit of a challenge. All right, we've got so many more things [00:54:47]Alessio: we want to dive into, but I don't want to keep you here for hours. [00:54:50]Swyx: This is not the Lex Fridman podcast, [00:54:52]Alessio: as we always like to say. One topic I would love to maybe chat a bit about is Mojo, Modular, you know, Chris Lattner, who's been on the podcast. So we want to spend a little time there. You recently did a hacker's guide to language models, and you ran through everything from quantized models to smaller models, larger models, and all of that. But obviously Modular is taking its own approach. Yeah, what got you excited?
I know you and Chris have been talking about this for, like, years, and a lot of the ideas you had, so. [00:55:23]Jeremy: Yeah, yeah, yeah, no, absolutely. So I met Chris, I think it was at the first TensorFlow Dev Summit. And I don't think he had even officially started his employment with Google at that point. So, I don't know, certainly nothing had been mentioned. So, you know, I admired him from afar, with LLVM and Swift and whatever. And so I saw him walk into the courtyard at Google, and it's just like, oh s**t, man, that's Chris Lattner. I wonder if he would lower his standards enough to talk to me. Well, worth a try. So I gathered up my courage, because, like, nobody was talking to him. He looked a bit lost and I wandered over, and it's like, oh, you're Chris Lattner, right? It's like, what are you doing here? What are you doing here? And I was like, yeah, yeah, yeah. It's like, oh, I'm Jeremy Howard. It's like, oh, do you do some of this AI stuff? And I was like, yeah, yeah, I like this AI stuff. Are you doing AI stuff? It's like, well, I'm thinking about starting to do some AI stuff. Yeah, I think it's going to be cool. And it's like, wow. So, like, I spent the next half hour just basically brain-dumping all the ways in which AI was stupid, to him. And he listened patiently. And I thought he probably wouldn't even remember it or care or whatever. But yeah, then I kind of re-caught up with him a few months later, and it's like, I've been thinking about everything you said in that conversation. And he like narrated back his response to every part of it, projects he was planning to do. And it's just like, oh, this dude follows up. Holy s**t. And I was like, wow, okay. And he was like, yeah, so we're going to create this new thing called Swift for TensorFlow. And it's going to be like, it's going to be a compiler with auto-differentiation built in, and blah, blah, blah. And I was like, why would that help? [00:57:10]Swyx: You know, why would you? [00:57:10]Jeremy: And he was like, okay, with a compiler, during the forward pass you don't have to worry about saving context, you know, because a lot will be optimized in the backward pass. And I was like, oh my God. Because I didn't really know much about compilers. You know, I knew enough to kind of understand the ideas, but it hadn't occurred to me that a compiler basically solves a lot of the problems we have as end users. I was like, wow, that's amazing. Okay, but you do know, right, that nobody's going to use this unless it's, like, usable. It's like, yeah, I know, right. So I was thinking you should create like a fast.ai for this. So, okay, but I don't even know Swift. And he was like, well, why don't you start learning it? And if you have any questions, ask me. It's just like, holy s**t. Like, not only has Chris Lattner lowered his standards enough to talk to me, but he's offering me personal tutoring on the programming language that he made. So I was just like, I'm not g
LISTEN, I know sex toys are all the rage, but I stand by my belief that a bedside table's bestie is lube. A little goes a longgg way, and it truly has the capability of enhancing your entire sex life in just a few drops. Today on Bedside I'm chatting with co-founders Hannah & Stephanie of Personal Fav about all things lube. We discuss the different types of lubricant, how to shop for a hesitant partner, and where to begin around improving your sex life! On this episode we cover: different types of lube, when to use what lube, clean ingredients 101, enhancing arousal, addressing bedroom stigma, improving your sex life, spicing things UP, and sex shop date night.
This episode was originally recorded and published in 2020. Our interviews on "Receta Del Éxito" are designed to be timeless (evergreen), and we do our best to confirm that all the offers and URLs mentioned in these archived episodes remain relevant. Over the course of his career, Dr. Teddy Cobeña has studied and analyzed his patients and has concluded that how you direct your energy determines the quality of life you can have; he also teaches us how to fall in love with our ideas and build a successful venture. Subscribe and visit us at: www.RecetaDelExito.com Apple Podcast (iTunes): https://apple.co/2Igcnoh Ready to create your podcast? www.CursoDePodcastGratis.com Twitter Handle: @alexdalirizo Facebook Page: https://www.facebook.com/recetadelexito/ RDExito: http://recetadelexito.com Instagram: https://www.instagram.com/recetdelexito/ Instagram: https://www.instagram.com/alexdalirizo/ Spotify: https://spoti.fi/3cmJqVs
Universal Lesson: Embrace that you've already arrived. This week I'm coming to you with a solo episode and we're talking ALL about MANIFESTATION. More specifically, how using a Monthly Archetype System for the past 9 months has completely shifted and transformed my life! I share what has worked for me most in this process and dish some of my current manifestations and mantras for this fall season. Happy Spooky/Libra Season for those who practice!!! On this episode we're chatting: monthly archetype system, recap & takeaways 9 months in, integrating pleasure practices, embracing seasonal shifts, my current manifestations, power of self-connection, slowing down to speed up. Mentioned Resources: monthly archetype episode: Craft A Monthly Archetype
John Mather is a Senior Astrophysicist in the Observational Cosmology Laboratory at NASA's Goddard Space Flight Center. He was the recipient of the 2006 Nobel Prize in Physics for his role as Principal Investigator for the Far IR Absolute Spectrophotometer on COBE, which observed the cosmic microwave background and helped support the big bang theory of the origin of the universe. John has also worked on many other projects for NASA, including the James Webb Space Telescope. In this episode, Robinson and John discuss the big bang and the cosmic microwave background before detailing the COBE satellite, its extraordinary findings, and the work that led to winning the Nobel Prize. The Very First Light: https://a.co/d/6iaWMOK OUTLINE 00:00 In This Episode… 00:35 Introduction 02:56 John's Scientific Background 12:50 Where Did the Big Bang Theory Come From 22:28 The Electromagnetic Spectrum 27:48 John's Thesis and the Road to COBE 42:57 Designing the Nobel-Winning COBE Satellite 01:05:38 Some Further Background 01:08:08 The Cosmic Microwave Background and the Nobel Prize 01:35:52 John's More Recent Projects Robinson's Website: http://robinsonerhardt.com Robinson Erhardt researches symbolic logic and the foundations of mathematics at Stanford University. Join him in conversations with philosophers, scientists, weightlifters, artists, and everyone in-between. --- Support this podcast: https://podcasters.spotify.com/pod/show/robinson-erhardt/support
This week I'm coming to you with a solo episode and we're talking ALL about my hormone and skin health journey. This has been a long time coming and I can truly say in the past few months I've healed my hormonal acne from the inside out. It's been a total transformation and I'm dishing my entire "acne protocol" and all the rituals and changes I've made to rebalance my hormones holistically. On this episode we're chatting: hormonal birth control, my acne protocol, needle-moving rituals, balancing hormones, pleasure practices, supplementing & nourishing, manifestation. Mentioned Resources: monthly archetype episode: Craft A Monthly Archetype
120: Even Superheroes Start Small. This episode is talking all about habits and how smaller is always better! When deciding on a healthier lifestyle and making those important changes, it is always better to start small. Even superheroes had to learn how to fly by starting small, one step at a time. Links mentioned in this episode: James Clear's book Atomic Habits; Episode 57: Tiny Improvements, Superhero Results. Be sure to connect with Lisa Barwise and Warrior Goddess Kettlebell Training on social media: Instagram @lisa_barwise @wgkettlebelltraining, Facebook www.facebook.com/warriorgoddesskettlebelltraining, Youtube https://www.youtube.com/warriorgoddesskettlebelltraining. What you can do to help the podcast: if this podcast means anything to you and you want to support it, simply Subscribe & Review in Apple Podcasts. Apple Podcasts is one of the only platforms where you can both subscribe and review. How to Subscribe or Follow The Podcast: 1. Open the Apple Podcast app. 2. Go to the icons at the bottom of the screen and choose "Search". 3. Search for "Goddess Got Goals". 4. Hit the top right-hand "+" sign. 5. Open Spotify. 6. Search for "Goddess Got Goals". 7. Hit "Follow" underneath the image. How to Leave a Podcast Review: Open the Apple Podcast app. Go to the icons at the bottom of the screen and choose "Search". Search for "Goddess Got Goals". Click on the SHOW, not the episode. Scroll all the way down to "Ratings and Reviews". Click on "Write a Review". This is the best way for us to reach more people and of course let us know that our episodes mean something to you! Join us in our online Bootcamp: Be Your Own Superhero.
From artificial limbs to memory foam, many inventions have emerged from our quest to understand the cosmos. In this episode we explore cosmic history, space's impact on technology, and the enduring human fascination with space exploration. To take us on this journey is astrophysicist John Mather, a Nobel Prize winner for his work on the COBE satellite and a key figure in the James Webb Space Telescope project. Prepare to be intrigued and left with a sense of wonder about the universe's influence on our world. Topics Covered: 00:00 – Innovations through the pursuit of space; 03:52 – John's early life; 06:44 – Proving the Big Bang theory; 13:30 – The mysteries of quantum mechanics; 15:10 – Leading the James Webb Telescope; 17:12 – Images from James Webb; 20:32 – Are we alone?; 24:20 – New telescopes; 25:18 – Engineering in space for earth; 29:31 – What would you like to see solved in your lifetime?; 32:24 – What came before the Big Bang?; 25:04 – Misconceptions about space; 37:17 – Can humans be a multiplanetary species?; 38:20 – Private vs public spending in space; 40:24 – What's the future of space exploration? Resources: COBE satellite imagery: https://www.nasa.gov/topics/universe/features/cobe_20th.html Images from the James Webb Telescope: https://webbtelescope.org/images Exoplanet transmission spectrum: https://webbtelescope.org/contents/media/images/2022/032/01G72VSFW756JW5SXWV1HYMQK4 Stay Updated: Find a16z on Twitter: https://twitter.com/a16z Find a16z on LinkedIn: https://www.linkedin.com/company/a16z Subscribe on your favorite podcast app: https://a16z.simplecast.com/ Follow our host: https://twitter.com/stephsmithio Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain
Dr. John Mather is a Senior Astrophysicist in the Observational Cosmology Laboratory located at NASA's Goddard Space Flight Center, Greenbelt, MD.* He is also the Senior Project Scientist on the James Webb Space Telescope, which will be the largest, most powerful and complex space telescope ever built and launched into space. It will fundamentally alter our understanding of the universe. Mather was the winner of the 2006 Nobel Prize in Physics with George Smoot for their work on the Cosmic Background Explorer (COBE) mission, begun in the mid-1970s, to measure the heat radiation from the Big Bang. Mather and his team measured the cosmic microwave background radiation—basically very faint radio noise astronomers had theorized could only come from the most distant events at the beginning of time as we know it—and their measurements confirmed the Big Bang theory to extraordinary accuracy. The James Webb Space Telescope (JWST) is a large infrared telescope that will be the premier space observatory of the next decade, and Mather has been the Senior Scientist on this project from its origin in 1995. The James Webb is scheduled to launch in 2021 and will study every phase in the history of our Universe, ranging from the first luminous glows after the Big Bang, to the formation of solar systems capable of supporting life on planets like Earth, to the evolution of our own Solar System. We discuss Mather's long career at NASA's Goddard Space Flight Center, his work on COBE and JWST, the Hubble Space Telescope, and the planning of the Nancy Grace Roman Space Telescope. *This episode was originally published in February 2021.
This week I'm coming in HOT with a solo episode and we're talking ALL about UPLEVELING your life. Call it "hot girl summer", call it a "glow up", we're getting into upgrading your OS. Whether you've been sensing an upgrade for a long time coming, or you're just dipping your toes in, we're covering the basics and foundation for all things FUTURE YOU. On this episode we're chatting: upgrades & uplevels, graduating relationships, manifestation, building a foundation, finding support systems, sexual energy, tapping into desire.
Many parents are seeking ways to give their children a better head start financially in life. Our guest, Ksenia Yudina, is a chartered financial analyst, a Mom, and the Founder and CEO of UNest, an app that empowers parents to easily invest and save for their kids' future. After coming to the U.S. from Russia at 18, Ksenia took on $180,000 in debt to pay for college. She quickly realized that many other graduates were strapped with debt and saw a need for a more accessible savings vehicle than the traditional 529 that requires confusing paperwork and fees. In 2020, Ksenia launched UNest, and has raised over $38 million from well-known VCs to build her fintech platform into a popular savings and investment vehicle for parents. With many of UNest's team members located in Ukraine, on February 24, 2022, Ksenia had to work around the clock to evacuate her team members to safety. To learn more about UNest and download the app, please visit: https://www.unest.co Be sure to follow UNest on LinkedIn and Twitter here: https://www.linkedin.com/company/u-nest/ https://twitter.com/UNest Thank you for carving out time to improve your Founder Game - when you do better, your startup will do better - cheers! Ande ♥ https://andelyons.com #bestpodcastforstartups #startupstories #startuplife JOIN STARTUP LIFE LIVE MEETUP GROUP Get an alert whenever I post a new show! https://bit.ly/StartupLifeLIVE CONNECT WITH ME ONLINE: https://twitter.com/AndeLyons https://www.linkedin.com/in/andelyons/ https://www.instagram.com/ande_lyons/ TikTok: @andelyons ANDELICIOUS ANNOUNCEMENTS Join Innovation Women here: https://bit.ly/AndeInnoWomen Arlan's Academy: https://arlansacademy.com/ Scroobious - use Ande15 discount code: https://www.scroobious.com/ How to Raise a Seed Round: https://bit.ly/AAElizabethYin Tune in to Mia Voss' Shit We Don't Talk About podcast here: https://shitwedonttalkaboutpodcast.com/ SPONSORSHIP If you resonate with the show's mission of amplifying diverse founder voices while serving first-time founders around the world, please reach out to me to learn more about making an impact through sponsoring the Startup Life LIVE Show! ande@andelyons.com. Ande ♥ #fintechfounder #startupstories #raisingcapital #femalefounder #startupstrategiesandadvice