POPULARITY
Send Dr. Li a text here. Please leave your email address if you would like a reply, thanks.

In this uplifting episode of the Make Time for Success podcast, Dr. Christine Li shares her favorite habit for breaking through negativity and feeling stuck: celebrating your wins, no matter how small. Drawing from her own experience with group coaching and the success of her clients, she explains how taking time to recognize your achievements can boost your momentum, consistency, and overall joy. Dr. Li offers four practical tips to help you build this habit into your daily routine -- including keeping a "done list," infusing fun into your work, embracing a mindset of certainty, and not boxing yourself in -- so you can end each day feeling accomplished and motivated. Plus, she shares a free downloadable worksheet to help you get started on your own winning list!

Timestamps:
00:00 "Success Habits Unleashed"
03:16 Celebrate Wins for Productivity Boost
09:53 Embrace Imperfection, Find Joy
11:25 Embrace Certainty for Success
15:52 Embrace Potential, Defy Limitations
18:51 "Noticing Wins Sparks Growth"

To get the Celebrating Your Wins worksheet (free), go to: https://maketimeforsuccesspodcast.com/winninglist
To sign up for the waitlist for the Simply Productive Program, go to: https://maketimeforsuccesspodcast.com/SP
For more information on the Make Time for Success podcast, visit: https://www.maketimeforsuccesspodcast.com
Gain access to Dr. Christine Li's Free Resource Library -- 12 downloadable tools and templates to help you bypass the impulse to procrastinate: https://procrastinationcoach.mykajabi.com/freelibrary
To work with Dr. Li on a weekly basis in her coaching and accountability program, register for The Success Lab here: https://www.procrastinationcoach.com/lab

Connect with Us!
Dr. Christine Li
Website: https://www.procrastinationcoach.com
Facebook Group: https://www.facebook.com/groups/procrastinationcoach
Instagram: https://www.instagram.com/procrastinationcoach/
TikTok: https://www.tiktok.com/@procrastinationcoach
The Success Lab: https://maketimeforsuccesspodcast.com/lab
Simply Productive: https://maketimeforsuccesspodcast.com/SP

If you're wondering how you'll ever find the time, energy, and motivation to handle your clutter, the free Re-Energize Your Home 5-Day Challenge was designed specifically for you. Starting May 26th, we will band together to raise our energy and quickly move the physical clutter out so our mental and emotional space can expand -- for the better. Join this fun, free event here: https://maketimeforsuccesspodcast.com/win
In this video on Soul02, we explore the healing benefits of taking a personal Sabbath and how it counteracts the relentless demands of a culture that constantly asks for "More." Connect with us: YouTube: YouTube.com/@soul02-oxygen Facebook: @LP.Oxygen https://www.facebook.com/LP.Oxygen Instagram: LP.Oxygen Twitter: @Soul025 Buzzsprout: Soul02-Buzzsprout Spotify: Soul02 - Spotify Apple: Soul02-Itunes Stitcher: Soul02-Stitcher
THINK WITH FARAH - Entrepreneurship, personal and emotional development
What if the reason your marketing isn't working is that it isn't aligned with YOUR energy and YOUR personal power? Too many entrepreneurs apply strategies they have seen elsewhere without asking whether those strategies really suit them. The result? Fatigue, lack of results, frustration, and communication that rings false. The good news: you can sell powerfully WITHOUT forcing yourself. You can attract clients while staying 100% aligned. But to do that, you have to build marketing that is a REFLECTION of your personal power.
In this episode, I think out loud, drawing also on your Instagram replies to the questions I raised. Thanks again for the exchanges I was able to have with some of you.
Duration: 00:31:13 - Les Nuits de France Culture - by Albane Penaranda - In 1985, Pierre Boulez, then 60 years old, was in the spotlight on France Culture. In the fourth of five interviews given to Michèle Reverdy, he spoke about the staging of music, concerts of magnetic tape, opera, and his interest in non-European music. - Production: Emily Vallat - Guests: Pierre Boulez, French composer, conductor, and teacher (Montbrison, 1925 - Baden-Baden, 2016)
A message from Björn Lükte for the Connect Annecy church. Our apologies for the section that was cut from the podcast ... we had a small technical problem.
Duration: 00:04:25 - Le Pourquoi du comment : philo - by Frédéric Worms - Why do we confer titles? Do they reflect individual merit or a social hierarchy? Is a title merely a function, or a form of recognition? What is the difference between master and disciple? Can a society exist without distinctions of title? - Production: Brice Garcia
After going to AA for a year, the judge cut me a break and only fined me $3,500. In the early '80s that was a huge fine!
With energy practitioner Amélie Aura.
How is our heart comparable to water? How can we come to love a person we particularly despise? An answer drawn from the words of King Chlomo.
Liv Hospital Samsun - Assoc. Prof. Dr. Recep Aktimur - Reflux Surgery, by Teoman Aydoğan x marsnewmedia
Benjamin Morel is a lecturer (maître de conférences) at Paris II.
One of the most powerful choices we can make to integrate what we have lived and MANIFEST MORE is to let go of what no longer adds anything, to let go of what has already let go of us, and to consciously choose the people, situations, and things we want to fill our days and our life with. To look at the attachments we have to certain things, people, and dynamics that remind us of old versions of ourselves, and to remember that we can integrate what we have lived, and its lessons, into our cellular and energetic memory while releasing what no longer serves us. And to program our being to seek and find the magic in the cosmos we are part of and in ourselves. To keep our eyes and heart open to recognize the gift that lives in everything, and to understand that everything that happens in our life is in service of our evolution and transcendence. Welcome to this new series inspired by the themes of the course I will be teaching in January: ASCENDIENDO EN ESPIRAL (Ascending in a Spiral), a space where I will share tools, information, and practices to align yourself with your highest Cosmic Destiny and become a magnet that attracts everything that is meant for you and belongs to you by Divine Right. Online course: Ascendiendo en Espiral @sofiaalva___
Fifty-three years of dictatorship have just fallen. Fifty-three years of an authoritarian, repressive, bloodthirsty regime. Fifty-three years of an Assad clan that ruled unchallenged over all of Syria, confiscating its riches, shielded by Western powers who saw in the regime a factor of stability in the Middle East. At what cost for Syrians? For nearly a week, the French media have been captivated, rightly, by the historic moment of the fall of the Syrian regime ...
Listen to a daily meditation taken from the book "Parle Seigneur, ton serviteur écoute" ("Speak, Lord, your servant is listening"). The meditations in this collection come from exhortations given by Daniel Issarte in the context of the shared life of the Mission Timothée: team meetings, missionary journeys, simple fraternal exchanges. The result is a fairly personal tone and a reading of the biblical text that aims to be simple and practical, meeting the concerns of daily life and of the work, with prayer in view. The subjects covered are very varied, but every page and every subject brings us back to this crucial question: "When the foundations are destroyed, what can the righteous do?" (Psalm 11:3). Only the revelation of Scripture can guide us on a right path in every circumstance, and it is by meditating on the biblical text in a spirit of listening and faith that we will receive it in all its simplicity and power. So, in opening this book, we can prepare our hearts like the young Samuel and say with him, "Speak, Lord, your servant is listening" (1 Samuel 3:10). https://www.missiontimothee.fr/media/153/Parle-Seigneur-ton-serviteur-écoute
Reflect yourself in your breath and let it reflect itself in you; concentrate only on your breath and observe yourself well. With each breath, attend to every part of you, so that there is no tension, no discomfort, no ache. Surrender your body, set it down there, in the space you occupy. Let each breath help you. Observe how far you have come, whether there is ease or whether, on the contrary, you contract and it becomes difficult. If so, breathe more, with such subtlety that you make no sound; do not busy yourself with movements: still, silent, sustained, and in a certain way released. If the mind occupies you, release it too with each breath, without suppressing, without smothering, without inhibiting any thought; simply let them flow. Let everything pass through you, with no blockage, delay, snag, or obstruction. Let the breath free you from any tension, restlessness, inhibition. Our body, our soul here, in free transit, in perfect choice, in measured time, in exact configuration, in a plea for love, in a task. And the breath tells you neither more nor less than what you have to do: when, where, how, and until what moment, when you decide to stop. You fill yourself briefly and empty yourself amply. Our body and our soul here, conscious today, and everything that is occurring, and everything that is happening, and the whole planet active, smothering the vertigo, witnessing the destruction, suffering the unconsciousness, resisting the discontent, and giving everything. What do you lack? What does not exist? What can you not find? What do you demand? What do you take? What do you give? Your body and your soul in this depository of consciousnesses (most of them not yet knowing what they are doing here, what or who deposited them, with what plan, what objective, what purpose), being more than a resource, containing more than a hope, manifesting more than life itself. And the quiet, silent breath, sustaining, relieved, knowing itself in the perfect state for this elevation. Or do you prefer to stay in the depository? Taking up space, being used, counted, detected, managed. Who will get you out of here? Who will dare? Who has had enough? Who wants to renounce? Who refuses to kill their time? Who glimpses their freedom? Who longs to be more? The breath ceases, the body weighs nothing, and neither does the soul. The breath offers its breath, so inward. And the missiles fall, and the bodies fall, and the one who executes knows nothing. And in some way, the one who falls frees you, and you free the one who falls, and consciousness manifests itself, and you are there, you exist there, you breathe there. Never again choose this depository; make yourself a purpose, remember more, remember this and remember That. Return to the beginning; here there will never be an end, much less a joyful one. Aspire to your Being. Aspire to Glory. Aspire to peace. Never again. Gather your breath, attend to your soul, return to your body, open your eyes, you are here. Consider that. See what is happening. Consider that. Breathe deeply, settle yourself here and now, regain your strength and remember: be more. Thank yourself. Om Namaha Shivaya
Description: Few people realize that the apostle Paul gives as much weight to human relationships as to the doctrine of justification by faith. That shows us clearly how crucial relationships between brothers and sisters are in God's eyes. Today, we will explore together biblical principles that foster healthy, flourishing relationships, so that we can better reflect the love of Jesus in our daily lives and in our interactions with others. #ensemble #4 At Le Sentier church, we are firmly convinced of the impact we have, by the Lord's grace, in rallying the brothers and sisters of the Church. We want to continue our mission of saturating our community with the Gospel, not only within our church and our city but also online, making Le Sentier shine across the web. Please consider subscribing to the church's YouTube channel https://www.youtube.com/egliselesentier?sub_confirmation=1 Follow Le Sentier church on: - Facebook (https://www.facebook.com/egliselesentier) - YouTube (https://www.youtube.com/user/egliselesentier) - Website (https://www.egliselesentier.com) #gatineau #eglise #egliselesentier #jesus
In this episode of the Mindset Show, we explore a fundamental concept for facing difficult moments with strength and determination: resilience. Learning to bounce back after a failure, to adapt to the unexpected, and to turn challenges into opportunities is what we will dig into today.
We will cover:
What is resilience? An essential capacity to get back up after a hard blow, learning from it and becoming stronger.
The pillars of a resilient mindset: how to change your perception of challenges, cultivate patience and flexibility, and avoid the trap of victimhood.
Concrete techniques for strengthening resilience: building a support network, practicing mindfulness and gratitude, setting realistic goals, and taking care of your physical health.
Turning adversity into strength: reflecting on the lessons learned and reframing your personal story to focus on the capacities you have gained through hardship.
Join us to understand how to develop the inner strength that will let you grow through every trial and lead a calmer, more fulfilled life.
See you tomorrow for a new episode to help you unlock your full potential!
Roland Garros day by day, with my reactions and takes on the tournament! Today: recap of day 13!
How clothing reflects our society
Laura from Colaurama talks about her profession: decoration. But a conscious, ecological kind of decoration, one unlike any other, that seeks to bring out the elements and energies we need every day for our well-being. We also talk about feng shui, to go even further into well-being. Her website: https://colaurama.fr/ Her Instagram, Colaurama: https://www.instagram.com/colaurama/ Follow me on Instagram: https://www.instagram.com/tout_en_ordre/ The Tout en Ordre website: https://www.tout-en-ordre.fr/
Reflecting within oneself without reflecting back. What speaks of you in me is perhaps not really you; rather the interface where, whatever happens, in this opening inward, facing you, I sense and approach that other... that I am, probably not really me... The projection is there; it is from here, from this point of contact, that it sets out. Its rays spread still further, still after. Which inner mirrors bear witness to the heart of the encounter? Those mineral faces, unchanging and cold, that people fix in all their living spaces to adjust their masks, or those fluid, natural surfaces creased with emotion, borders between sensitive depths, with the warmth of a breath or the frost of the earth? A question of acuity, of awareness, of sense, of intention. Then, detached, before the path opened by this light, I can very well step into the mirror here, feel and not reflect, at the origin of the reflections. Thank you, sister or brother, friend, little tyrant, that "Other" facing me that you are: you touch and rub in the same way, whether where I hurt or where I am well. In what bothers and irritates me about you, I see what I would so much like to change in myself. What stings me in your expectations, attacks, and criticisms I have long repressed in myself and should accept, if only to heal from it. What I recognize and own of what annoys you in me, what you refuse and push me to reconsider, speaks only of you. Everything I find admirable and love in you touches and delights me because I have it too. We wade in the same waters, and each of us, tossed by the other's eddies, learns there to swim, to stay afloat. I must leave you to your good reasons for doing, holding back, keeping silent, or speaking, to dress your wounds and patch your cracks. I keep the lesson our bond reveals, offered to my awakening: the other is salutary to me, an innocent mirror. --- (Rerun) Text: ©Renaud Soubise. Music: ©Villa-Lobos, Bachianas Brasileiras No. 5, Movement 1, Aria Cantilena, text by Ruth Valadares Correa (1938)
Is your job a passion? Tell us in the comments! Is the money we earn proportional to the effort we put into our work? How do others look at your work? Do social networks make it "easier" to earn money? Today, in our VERSUS format, we welcome Ludovic Franceschet, garbage collector and content creator, and Maryline, better known as "Sweetbodymary," an adult content creator. The goal of this conversation is to discuss our relationship with work and money, and to talk about so-called "traditional" jobs and their place alongside newer ones. IN THIS EPISODE: 00:00: True/False 01:07: Is your salary worth the work you put in? 08:05: What preconceptions do you have about each other's job? 13:13: How do others view your job? 14:59: What motivated your career choice? 19:47: What do those around you think of your job? 23:33: Maryline, do you carry a message on social media? 25:24: Could you do each other's job? 30:49: Do you do your work for others? 35:24: Would you recommend your job? 43:37: What does success mean? 44:43: A piece of advice for the person across from you? 45:09: Final words. Le Crayon is on all platforms! ► Instagram: https://www.instagram.com/lecrayonmedia/ ► TikTok: https://www.tiktok.com/lecrayonmedia/ ► Facebook: https://www.facebook.com/lecrayonmedia/ ► Articles: https://lecrayon.kessel.media/ ► LinkedIn: https://www.linkedin.com/company/le-crayon-politique/ ► Podcasts: https://podcasters.spotify.com/pod/show/le-crayon ► X: https://twitter.com/lecrayonmedia/ ► Our website: https://www.lecrayongroupe.fr/
FREE gift on our MAILING LIST! ➡️https://www.letraminuscula.com/suscribirse-lista-de-correo/ Visit our WEBSITE https://www.letraminuscula.com/ If you want to PUBLISH, write to us: contacto@letraminuscula.com Call us ☎ or message us on WhatsApp: +34640667855 SUBSCRIBE to the channel! CLICK HERE: https://bit.ly/2Wv1fdX SUMMARY: Roberto Augusto of Editorial Letra Minúscula discusses the fashion for speed reading and its relevance for writers, contrasting the effectiveness of deep reading with quick techniques. He explains that although speed reading can be useful in specific contexts such as studying or proofreading, it usually loses important details that can only be appreciated by reading carefully. He stresses the importance of quality over quantity in reading and suggests that true comprehension takes time and rereading, especially for complex, artistic works. ⏲TIMESTAMPS: ▶️00:09 Reflections on speed reading and its impact ▶️01:34 A personal critique of the effectiveness of speed reading ▶️02:48 Constant practice beats quick courses ▶️03:59 The risk of losing nuance with speed reading ▶️05:23 The value of reading deeply over reading a lot ▶️06:38 A preference for quality and depth in reading
While the leading stock indexes are moving sideways, significant moves are happening beneath the surface. Some of the stocks serving premium consumers (Tesla, Apple, Nike, Starbucks) are weak, as are the AI names riding on others' coattails. Oil-linked stocks and defense industry names are strong. The US inflation print did not cheer up investors waiting for rate cuts. Compared with the bond market, the US equity market is remarkably resilient. Áron Vidovszky, Richárd Jónap, and Tamás Móró discussed the week's most important events. Read about world events every day through a Concorde lens: https://www.concordeblog.hu/ Follow us on all our channels: https://www.linkedin.com/company/concordecsoport/ https://www.instagram.com/concordecsoport/ https://www.facebook.com/concorde/ https://www.youtube.com/@concorde_csoport #concorde #podcast
Does what we wear define us? Tell us in the comments! Is fashion ethical? Are clothes a vector of inequality? Does what we wear define us? Five content creators came to talk about their relationship with fashion: Mathilde, Farid, Bastien, Virginie, and Maxime. They have ALL already been judged for what they wear, so will they judge others? We challenged them on it. IN THIS EPISODE: 00:00: Introduction 00:23: Whose accessory is it? 02:59: People judge me for what I wear. 14:39: My style reflects my ideas. 19:31: Are clothes a vector of inequality? 34:00: School uniforms: for or against? 41:21: I am influenced by what I see on social media. 45:00: I don't care where my clothes come from. 58:00: Revealing the accessories. 01:08:13: Final words. Le Crayon is on all platforms! ► Instagram: https://www.instagram.com/lecrayonmedia/ ► TikTok: https://www.tiktok.com/lecrayonmedia/ ► Facebook: https://www.facebook.com/lecrayonmedia/ ► Articles: https://lecrayon.kessel.media/ ► LinkedIn: https://www.linkedin.com/company/le-crayon-politique/ ► Podcasts: https://podcasters.spotify.com/pod/show/le-crayon ► X: https://twitter.com/lecrayonmedia/ ► Our website: https://www.lecrayongroupe.fr/
AA readings. --- Send in a voice message: https://podcasters.spotify.com/pod/show/fernando-montes-de-oca/message Support this podcast: https://podcasters.spotify.com/pod/show/fernando-montes-de-oca/support
In this new episode of the Savoir FAC podcast, Julie Fitzbay and Audrée Morin, Directors, Strategy and Business Transition at FAC, look at the will, a central but often underestimated element of any business transfer. The two specialists demystify will and estate planning, offering valuable advice and tools to get the most out of it.
Cendres runs the gallery l'Etranger and Antho heads the rap label 21h10. Both of them showcase artists, whether through exhibitions or concerts, and at their own scale they shape the cultural landscape. So how do they choose the artists they put forward? What does that say about them? How do they keep coherence, a clear artistic direction? Does it reveal a desire to hide behind others in order to be forgotten? These questions led us to ask whether their role is not, in the end, to make the world's melancholy intelligible. Deep down, is there such a thing as happy art? For listeners in Grenoble, the gallery invites you to its exhibition "La vie des autres," on rebuilding the pathways of empathy, by Hugo Chazot and Théo Lalliot, starting February 1, 2024, and also on February 12 for a round table on perceptions of the Arab world, in partnership with the Sciences Po Arab world association. Aurélia :)
Listen to a daily meditation taken from the book "Parle Seigneur, ton serviteur écoute" ("Speak, Lord, your servant is listening"). The meditations in this collection come from exhortations given by Daniel Issarte in the context of the shared life of the Mission Timothée: team meetings, missionary journeys, simple fraternal exchanges. The result is a fairly personal tone and a reading of the biblical text that aims to be simple and practical, meeting the concerns of daily life and of the work, with prayer in view. The subjects covered are very varied, but every page and every subject brings us back to this crucial question: "When the foundations are destroyed, what can the righteous do?" (Psalm 11:3). Only the revelation of Scripture can guide us on a right path in every circumstance, and it is by meditating on the biblical text in a spirit of listening and faith that we will receive it in all its simplicity and power. So, in opening this book, we can prepare our hearts like the young Samuel and say with him, "Speak, Lord, your servant is listening" (1 Samuel 3:10). www.missiontimothee.fr/parole-partagee-bdd/ouvrage
Chaar Hayi'houd véhaémouna #17, Ch. 6.1: Does art reflect reality? In the world there are two parts, the human and the divine. Is there a link between them?
On Friday, November 3, the evolution of the markets in the current geopolitical and macroeconomic context was discussed by Nathalie Pelras, head of portfolio management at Fourpoints; Eric Lewin, editor-in-chief of Publications Agora; Patrice Gautry, chief economist at Union Bancaire Privée; and Ana Boata, head of economic research at Allianz Trade, hosted by Marc Fiorentino on C'est Votre Argent on BFM Business. Catch the show on Fridays and listen again as a podcast.
Smile: here is the news that restores your faith in humanity. Iceland is not just volcanoes, northern lights, and extra-protein skyr; it is also the country that breaks records on gender parity. Which does not stop Icelandic women from going on general strike today. Will that be enough to dismantle the patriarchy in Iceland? This story is taken from 10 minutes pour sauver le monde, So good Radio's daily news show. Hosted by Acast. Visit acast.com/privacy for more information.
Asalam aleykoum my dear sisters, I hope you are doing well. We meet again today for a new episode about the art of expressing yourself well, and I share several tips for becoming a pro at controlling your language and the way you speak. In any case, I hope you enjoyed this episode and found it at least somewhat interesting. It was a very difficult topic for me, so please be kind. If you liked it, feel free to leave a rating and a comment on Apple Podcasts. Also, come talk to me on Snapchat or Instagram so we can chat, my dear sisters! Kisses, my dear sisters, and see you soon for the next episode. May Allah protect you, have mercy on you, and make everything easy for you in all the actions and trials of your life. My Snapchat and Instagram: sheisilhaam My TikTok: __sousouuu
Marc Griffin and Alain Usereau discuss the Blue Jays' collapse, as well as all the action currently under way in the MLB playoffs.
TOC TOC PASTEUR (Knock Knock, Pastor) Pastor, is a Christian whose behavior does not reflect Christ really a Christian? Do you have a question of your own? Send it to "mslive@vasesdhonneur.info" and I will answer it in TOC TOC PASTEUR every Wednesday at 12:30 GMT during the MSLive. ============================= Did you know? Pastor Mohammed Sanogo has now written 31 books. Get one of them today: by contacting the Librairie d'honneur: https://www.facebook.com/Librairiedhonneur or via Amazon: https://amzn.to/3odJpdF To keep being edified, our content is available on social networks such as YouTube, Facebook, Instagram, and TikTok. Feel free to like, comment, share, and subscribe. Youtube: https://www.youtube.com/c/MohammedSANOGO Facebook: https://www.facebook.com/PasteurMohammedSanogo Instagram: https://www.instagram.com/mohammedsanogo Twitter: https://twitter.com/MohammedSanogo Tik Tok: https://tiktok.com/@MohammedSanogo ============================= God bless you! --- Send in a voice message: https://podcasters.spotify.com/pod/show/mohammedsanogo/message
With Benedicte Devolve, who since 2017 has been leading workshops on learning to reveal your deep, unique personality through clothing, a true gateway to yourself and to others. And Raphaëlle Hubin, who founded the Foyer de Femmes in 2017 out of a desire to open everyday spaces where women can recharge together. …
Duration: 00:58:50 - Les Cours du Collège de France - by Merryl Moneghetti - What played out between France and Germany around Watteau's L'Enseigne de Gersaint? Why the growing interest in this painting? Does it reflect the French spirit? Bénédicte Savoy revisits the history of this work and the identity projections it has inspired.
We are excited to be the first podcast in the world to release an in-depth interview on the new SOTA in commercially licensed open source models - MosiacML MPT-7B!The Latent Space crew will be at the NYC Lux AI Summit next week, and have two meetups in June. As usual, all events are on the Community page! We are also inviting beta testers for the upcoming AI for Engineers course. See you soon!One of GPT3's biggest limitations is context length - you can only send it up to 4000 tokens (3k words, 6 pages) before it throws a hard error, requiring you to bring in LangChain and other retrieval techniques to process long documents and prompts. But MosaicML recently open sourced MPT-7B, the newest addition to their Foundation Series, with context length going up to 84,000 tokens (63k words, 126 pages):This transformer model, trained from scratch on 1 trillion tokens of text and code (compared to 300B for Pythia and OpenLLaMA, and 800B for StableLM), matches the quality of LLaMA-7B. It was trained on the MosaicML platform in 9.5 days on 440 GPUs with no human intervention, costing approximately $200,000. Unlike many open models, MPT-7B is licensed for commercial use and it's optimized for fast training and inference through FlashAttention and FasterTransformer.They also released 3 finetuned models starting from the base MPT-7B: * MPT-7B-Instruct: finetuned on dolly_hhrlhf, a dataset built on top of dolly-5k (see our Dolly episode for more details). * MPT-7B-Chat: finetuned on the ShareGPT-Vicuna, HC3, Alpaca, Helpful and Harmless, and Evol-Instruct datasets.* MPT-7B-StoryWriter-65k+: it was finetuned with a context length of 65k tokens on a filtered fiction subset of the books3 dataset. While 65k is the advertised size, the team has gotten up to 84k tokens in response when running on a single node A100-80GB GPUs. ALiBi is the dark magic that makes this possible. Turns out The Great Gatsby is only about 68k tokens, so the team used the model to create new epilogues for it!On top of the model checkpoints, the team also open-sourced the entire codebase for pretraining, finetuning, and evaluating MPT via their new MosaicML LLM Foundry. The table we showed above was created using LLM Foundry in-context-learning eval framework itself!In this episode, we chatted with the leads of MPT-7B at Mosaic: Jonathan Frankle, Chief Scientist, and Abhinav Venigalla, Research Scientist who spearheaded the MPT-7B training run. We talked about some of the innovations they've brought into the training process to remove the need for 2am on-call PagerDutys, why the LLM dataset mix is such an important yet dark art, and why some of the traditional multiple-choice benchmarks might not be very helpful for the type of technology we are building.Show Notes* Introducing MPT-7B* Cerebras* Lottery Ticket Hypothesis* Hazy Research* ALiBi* Flash Attention* FasterTransformer* List of naughty words for C4 https://twitter.com/code_star/status/1661386844250963972* What is Sparsity?* Hungry Hungry Hippos* BF16 FPp.s. yes, MPT-7B really is codenamed LLongboi!Timestamps* Introductions [00:00:00]* Intro to Mosaic [00:03:20]* Training and Creating the Models [00:05:45]* Data Choices and the Importance of Repetition [00:08:45]* The Central Question: What Mix of Data Sets Should You Use? 
[00:10:00]* Evaluation Challenges of LLMs [0:13:00]* Flash Attention [00:16:00]* Fine-tuning for Creativity [00:19:50]* Open Source Licenses and Ethical Considerations [00:23:00]* Training Stability Enhancement [00:25:15]* Data Readiness & Training Preparation [00:30:00]* Dynamic Real-time Model Evaluation [00:34:00]* Open Science for Affordable AI Research [00:36:00]* The Open Approach [00:40:15]* The Future of Mosaic [00:44:11]* Speed and Efficiency [00:48:01]* Trends and Transformers [00:54:00]* Lightning Round and Closing [1:00:55]TranscriptAlessio: [00:00:00] Hey everyone. Welcome to the Latent Space podcast. This is Alessio partner and CTO-in-Residence at Decibel Partners. I'm joined by my co-host, Swyx, writer and editor of Latent Space.Swyx: Hey, and today we have Jonathan and Abhi from Mosaic ML. Welcome to our studio.Jonathan: Guys thank you so much for having us. Thanks so much.Swyx: How's it feel?Jonathan: Honestly, I've been doing a lot of podcasts during the pandemic, and it has not been the same.Swyx: No, not the same actually. So you have on your bio that you're primarily based in Boston,Jonathan: New York. New York, yeah. My Twitter bio was a probability distribution over locations.Swyx: Exactly, exactly. So I DMd you because I was obviously very interested in MPT-7B and DMd you, I was like, for the 0.2% of the time that you're in San Francisco, can you come please come to a podcast studio and you're like, I'm there next week.Jonathan: Yeah, it worked out perfectly. Swyx: We're really lucky to have you, I'll read off a few intros that people should know about you and then you can fill in the blanks.So Jonathan, you did your BS and MS at Princeton in programming languages and then found your way into ML for your PhD at MiT where you made a real splash with the lottery ticket hypothesis in 2018, which people can check up on. I think you've done a few podcasts about it over the years, which has been highly influential, and we'll talk about sparse models at Mosaic. You have also had some side [00:01:30] quest. You taught programming for lawyers and you did some law and privacy stuff in, in DC and also did some cryptography stuff. Um, and you've been an assistant professor at Harvard before earning your PhD.Jonathan: I've yet to start.Swyx: You, you yet to start. Okay. But you just got your PhD.Jonathan:. I technically just got my PhD. I was at Mosaic which delayed my defense by about two years. It was, I was at 99% done for two years. Got the job at Harvard, Mosaic started, and I had better things to do than write my dissertation for two years. Swyx: You know, you know, this is very out of order.Jonathan: Like, oh, completely out of order, completely backwards. Go talk to my advisor about that. He's also an advisor at Mosaic and has been from the beginning. And, you know, go talk to him about finishing on time.Swyx: Great, great, great. And just to fill it out, Abhi, you did your BS and MS and MIT, you were a researcher at Cerebras, and you're now a research scientist at Mosaic. Just before we go into Mosaic stuff, I'm actually very curious about Cereus and, uh, just that, that space in general. Um, what are they doing that people should know about?Abhinav: Yeah, absolutely. 
Um, I think the biggest thing about CEREUS is that they're really building, you know, kind of the NextGen computing platform beyond, like GPUs.Um, they're trying to build a system that uses an entire wafer, you know, rather than cutting up a wafer into smaller chips and trying to train a model on that entire system, or actually more recently on many such wafers. Um, so it's, and it's really extraordinary. I think it's like the first time ever that kind of wafer scale computing has ever really worked. And so it's a really exciting time to be there, trying to figure out how we can map ML workloads to work, um, on a much, much bigger chip.Swyx: And do you use like [00:03:00] a different programming language or framework to do that? Or is that like..Abhinav: Yeah, so I mean, things have changed a bit since I was there.I think, um, you can actually run just normal tensor flow and pie torch on there. Um, so they've built a kind of software stack that compiles it down. So it actually just kind of works naturally. But yeah.Jonathan : Compiled versions of Python is a hot topic at the moment with Mojo as well. Swyx: And then Mosaic, you, you spearheaded the MPT-7B effort.INTRO TO MOSAIC [00:03:20]Abhinav: Uh, yeah. Yeah, so it's kind of like, it's been maybe six months, 12 months in the making. We kind of started working on LMs sort of back in the summer of last year. Um, and then we came with this blog post where we kind of profiled a lot of LMs and saw, hey, the cost of training is actually a lot lower than what people might think.Um, and then since then, you know, being inspired by kind of, you know, meta's release, so the LLaMA models and lots of other open source work, we kind of started working towards, well, what if we were to release a really good kind of 7 billion parameter model? And that's what MPT is. Alessio:You know, we mentioned some of the podcasts you had done, Jonathan, I think in one of them you mentioned Mosaic was not planning on building a model and releasing and obviously you eventually did. So what are some of the things that got you there that maybe obviously LLaMA you mentioned was an inspiration. You now have both the training and like inference products that you offer. Was this more of a research challenge in a way, uh, that you wanted to do?Or how did the idea come to be?Jonathan: I think there were a couple of things. So we still don't have a first class model. We're not an open AI where, you know, our businesses come to use our one great model. Our business is built around customers creating their own models. But at the end of the day, if customers are gonna create their own models, we have to have the tools to help them do that, and to have the tools to help them do that and know that they work we have to create our own models to start. We have to know that we can do something great if customers are gonna do something great. And one too many people may have challenged me on Twitter about the fact that, you know, mosaic claims all these amazing numbers, but, you know, I believe not to, you know, call out Ross Whiteman here, but, you know, I believe he said at some point, you know, show us the pudding.Um, and so Ross, you know, please let me know how the pudding tastes. But in all seriousness, like I think there is something, this is a demo in some sense. This is to say we did this in 9.5 days for a really reasonable cost, straight through 200, an intervention. 200 K. Yep. 
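The headline numbers there (440 GPUs, 9.5 days, roughly $200K) are easy to sanity-check. A minimal sketch of that arithmetic is below; the per-GPU-hour price is an assumed cloud rate for illustration, not a MosaicML figure.

```python
# Back-of-the-envelope check of "440 GPUs for 9.5 days at ~$200K".
# The hourly A100 price below is an assumption for illustration only.
gpus = 440
days = 9.5
assumed_usd_per_gpu_hour = 2.00

gpu_hours = gpus * days * 24                      # ~100,320 GPU-hours
estimated_cost = gpu_hours * assumed_usd_per_gpu_hour

print(f"{gpu_hours:,.0f} GPU-hours -> ~${estimated_cost:,.0f}")
# 100,320 GPU-hours -> ~$200,640, consistent with the ~$200K quoted above.
```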
Um, you can do this too.Swyx: Uh, and just to reference the numbers that you're putting out, this is the, the last year you were making a lot of noise for trading GPT 3 under 450 K, which is your, your initial estimate.Um, and then it went down to a 100 K and stable diffusion 160 k going down to less than 50 K as well.Jonathan: So I will be careful about that 100 K number. That's certainly the challenge I've given Abhi to hit. Oh, I wouldn't make the promise that we've hit yet, but you know, it's certainly a target that we have.And I, you know, Abhi may kill me for saying this. I don't think it's crazy. TRAINING AND CREATING THE MODELS [00:05:45] Swyx: So we definitely want to get into like estimation math, right? Like what, what needs to happen for those big order magnitude changes to in, in infrastructure costs. But, uh, let's kind of stick to the MPT-7B story. Yeah. Tell us everything.Like you have, uh, three different models. One of them. State of the art essentially on context length. Let's talk about the process of training them, the, uh, the decisions that you made. Um, I can go into, you know, individual details, but I just wanna let you let you rip.Abhinav: Yeah, so I mean, I think, uh, we started off with the base model, which is kind of for all practical purposes, a recreation of LLaMA 7B.Um, so it's a 7 billion perimeter model trained on the trillion tokens. Um, and our goal was like, you know, we should do it efficiently. We should be able to do it like, kind of hands free so we don't have to babysit the runs as they're doing them. And it could be kind of a, a launching point for these fine tune models and those fine tune models, you know, on, on the one hand they're kind of really fun for the community, like the story writer model, which has like a 65,000 length context window and you can even kind of extrapolate beyond that. Um, but they're, they're also kind of just tr inspirations really. So you could kind of start with an MPT-7B base and then build your own custom, you know, downstream. If you want a long context code model, you could do that with our platform. If you wanted one that was for a particular language, you could do that too.But yeah, so we picked kind of the three variance chat and instruct and story writer just kind of like inspirations looking at what people were doing in the community today. Yeah. Alessio: And what's the beginning of the math to come up with? You know, how many tokens you wanna turn it on? How many parameters do you want in a bottle? 7 billion and 30 billion seem to be kind of like two of the magic numbers going around right now. Abhinav: Yeah, definitely. Definitely. Yeah, I think like there's sort of these scaling laws which kind of tell you how to best spend your training compute if that's all you cared about. So if you wanna spend $200,000 exactly in the most efficient way, there'd be a recipe for doing that.Um, and that we usually go by the Chinchilla laws. Now for these models, we actually didn't quite do that because we wanted to make sure that people could actually run these at home and that they [00:07:30] were good for inference. So we trained them kind of beyond those chinchilla points so that we're almost over-training them.I think there's like a joke going on online that they're like long boy and that that came up internally because we were training them for really, really long durations. So that 7B model, the chinchilla point might be 140 billion tokens. 
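(A minimal sketch of the scaling-law arithmetic being described here, using the rough 20-tokens-per-parameter Chinchilla rule of thumb; the coefficient is an approximation, not an exact law.)

```python
def chinchilla_optimal_tokens(n_params: float, tokens_per_param: float = 20.0) -> float:
    """Approximate compute-optimal token budget (Chinchilla rule of thumb)."""
    return n_params * tokens_per_param

params = 7e9                                        # MPT-7B
optimal_tokens = chinchilla_optimal_tokens(params)  # ~1.4e11, i.e. ~140B tokens
trained_tokens = 1e12                               # MPT-7B was trained on 1T tokens

print(f"Chinchilla-optimal: ~{optimal_tokens / 1e9:.0f}B tokens")
print(f"Actually trained:   {trained_tokens / 1e9:.0f}B tokens "
      f"(~{trained_tokens / optimal_tokens:.1f}x past the optimum)")
```

That factor of roughly seven is the "longboi" joke: deliberately over-training a small model so it is cheaper to run at inference time.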
Instead, we trained a trillion, so almost seven times longer than you normally would.Swyx: So longboi was the code name. So is it, is it the trading method? Is it the scaling law that you're trying to coin or is it the code name for the 64 billion?Jonathan: Uh, 64. It was just an internal joke for the, for training on way more tokens than you would via chinchilla. Okay. Um, we can coin it long boy and it, it really stuck, but just to, you know, long boys filled with two ELs at the beginning.Yeah. Cause you know, we wanted the lLLaMA thing in there as well. Jonathan: Yeah, yeah, yeah. Our darn CEO we have to rein him in that guy, you know, you can't, yeah. I'm gonna take away his Twitter password at some point. Um, but you know, he had to let that one out publicly. And then I believe there was a YouTube video where someone happened to see it mentioned before the model came out and called it the Long G boy or something like that.Like, so you know, now it's out there in the world. It's out there. It's like Sydnee can't put it back inSwyx: There's a beautiful picture which I think Naveen tweeted out, which, um, shows a long boy on a whiteboard.Jonathan: That was the origin of Long Boy. In fact, the legs of the lLLaMA were the two Ls and the long boy.DATA CHOICES AND THE IMPORTANCE OF REPETITION [00:08:45]Swyx: Well, talk to me about your data choices, right? Like this is your passion project. Like what can you tell us about it?Jonathan: Yeah, I think Abhi wanted to kill me by the end for trying to use all the GPUs on data and none of them on actually training the model. Um, at the end of the day, We know that you need to train these models and [00:09:00] lots of data, but there are a bunch of things we don't know.Number one is what kinds of different data sources matter. The other is how much does repetition really matter? And really kind of repetition can be broken down into how much does quality versus quantity matter. Suppose I had the world's best 10 billion tokens of data. Would it be better to train on that a hundred times or better to train on a trillion tokens of low quality, fresh data?And obviously there's, there's a middle point in between. That's probably the sweet spot. But how do you even know what good quality data is? And. So, yeah, this is, nobody knows, and I think the more time I spent, we have a whole data team, so me and several other people, the more time that we spent on this, you know, I came away thinking, gosh, we know nothing.Gosh, if I were back in academia right now, I would definitely go and, you know, write a paper about this because I have no idea what's going on.Swyx: You would write a paper about it. I'm interested in such a paper. I haven't come across any that exists. Could you frame the central question of such a paper?THE CENTRAL QUESTION: WHAT MIX OF DATA SETS SHOULD YOU USE? [00:10:00]Jonathan: Yeah. The central question is what mix of data sets should you use? Okay. Actually I've, you know, you had mentioned my law school stuff. I went back to Georgetown Law where I used to teach, um, in the midst of creating this model, and I actually sat down with a class of law students and asked them, I gave them our exact data sets, our data mixes, um, like how many tokens we had, and I said, Create the best data set for your model.Knowing they knew nothing about large language models, they just know that data goes in and it's going to affect the behavior. Um, and I was like, create a mix and they basically covered all the different trade-offs. 
Um, you probably want a lot of English language [00:10:30] text to start with. You get that from the web, but do you want it to be multilingual?If so, you're gonna have a lot less English text. Maybe it'll be worse. Do you wanna have code in there? There are all these beliefs that code leads to models being better at logical reasoning, of which I've seen zero evidence. Rep. It's not, um, I mean, really made a great code model, but code models leading to better chain of thought reasoning on the part of language or code being in the training set leading to better chain of thought reasoning.People claim this all the time, but I've still never seen any real evidence beyond that. You know, one of the generations of the GPT three model started supposedly from Code Da Vinci. Yes. And so there's a belief that, you know, maybe that helped. But again, no evidence. You know, there's a belief that spending a lot of time on good sources like Wikipedia is good for the model.Again, no evidence. At the end of the day, we tried a bunch of different data mixes and the answer was that there are some that are better or worse than others. We did find that the pile, for example, was a really solid data mix, but you know, there were stronger data mixes by our evaluation metrics. And I'll get back to the evaluation question in a minute cuz that's a really important one.This data set called c4, which is what the original T five model was trained on, is weirdly good. And everybody, when I posted on this on Twitter, like Stella Beaterman from Luther mentioned this, I think someone else mentioned this as well. C4 does really well in the metrics and we have no idea why we de-duplicated it against our evaluation set.So it's not like it memorized the data, it is just one web scrape from 2019. If you actually look at the T five paper and see how it was pre-processed, it looks very silly. Mm-hmm. They removed anything that had the word JavaScript in it because they didn't want to get like no JavaScript [00:12:00] warnings. They removed anything with curly braces cuz they didn't wanna get JavaScript in it.They looked at this list of bad words, um, and removed anything that had those bad words. If you actually look at the list of bad words, words like gay are on that list. And so there's, you know, it is a very problematic, you know, list of words, but that was the cleaning that leads to a data set that seems to be unbeatable.So that to me says that we know nothing about data. We, in fact used a data set called mc four as well, which is they supposedly did the same pre-processing of C4 just on more web calls. The English portion is much worse than C4 for reasons that completely escape us. So in the midst of all that, Basically I set two criteria.One was I wanted to be at least as good as mc four English, like make sure that we're not making things actively worse. And mc four English is a nice step up over other stuff that's out there. And two was to go all in on diversity after that, making sure that we had some code, we had some scientific papers, we had Wikipedia, because people are gonna use this model for all sorts of different purposes.But I think the most important thing, and I'm guessing abhi had a million opinions on this, is you're only as good as your evaluation. And we don't know how to evaluate models for the kind of generation we ask them to do. So past a certain point, you have to kinda shrug and say, well, my evaluation's not even measuring what I care about.Mm-hmm. So let me just make reasonable choices. 
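For readers who have not looked at the T5/C4 preprocessing being referenced: it really is a handful of blunt heuristics. The sketch below is an illustrative approximation of that style of filter, pieced together from the description above; the real C4 pipeline and its blocklist differ, and BAD_WORDS here is a stand-in placeholder.

```python
# Illustrative C4-style document filter, loosely following the heuristics
# described above (drop JavaScript warnings, curly braces, blocklisted words).
# This is NOT the actual C4 or MosaicML cleaning pipeline.
BAD_WORDS = {"example-blocked-word"}   # placeholder for the public "naughty words" blocklist

def keep_document(text: str) -> bool:
    lowered = text.lower()
    if "javascript" in lowered:        # pages with "enable JavaScript" warnings
        return False
    if "{" in text or "}" in text:     # anything containing curly braces
        return False
    if any(word in lowered for word in BAD_WORDS):
        return False
    return True

docs = [
    "A clean paragraph of ordinary English prose.",
    "Please enable JavaScript to view this page.",
    "function f() { return 1; }",
]
kept = [d for d in docs if keep_document(d)]   # only the first document survives
```

The point being made is that filters this crude nevertheless yield a data mix that is hard to beat on the usual metrics, which says more about how little we understand data quality than about the filters themselves.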
EVALUATION CHALLENGES OF LLMs [0:13:00]Swyx: So you're saying MMLU, big bench, that kind of stuff is not. Convincing for youJonathan: A lot of this stuff is you've got two kinds of tasks. Some of these are more of multiple choice style tasks where there is a right answer. Um, either you ask the model to spit out A, B, C, or D or you know, and if you're more [00:13:30] sophisticated, you look at the perplexity of each possible answer and pick the one that the model is most likely to generate.But we don't ask these models to do multiple choice questions. We ask them to do open-ended generation. There are also open-ended generation tasks like summarization. You compare using things like a blue score or a rouge score, which are known to be very bad ways of comparing text. At the end of the day, there are a lot of great summaries of a paper.There are a lot of great ways to do open form generation, and so humans are, to some extent, the gold standard. Humans are very expensive. It turns out we can't put them into our eval pipeline and just have the humans look at our model every, you know, 10 minutes? Not yet. Not yet. Maybe soon. Um, are you volunteering Abhi?Abhinav: I, I, I just know we have a great eval team who's, uh, who's helping us build new metrics. So if they're listening,Jonathan: But it's, you know, evaluation of large language models is incredibly hard and I don't think any of these metrics really truly capture. What we expect from the models in practice.Swyx: Yeah. And we might draw wrong conclusions.There's been a debate recently about the emergence phenomenon, whether or not it's a mirage, right? I don't know if you guys have opinions about that process. Abhinav: Yeah, I think I've seen like this paper and all and all, even just kind of plots from different people where like, well maybe it's just a artifact of power, like log scaling or metrics or, you know, we're meshing accuracy, which is this a very like harsh zero one thing.Yeah. Rather than kind of something more continuous. But yeah, similar to what Jonathan was saying about evals. Like there there's one issue of like you just like our diversity of eval metrics, like when we put these models up, even like the chat ones, the instruct ones, people are using 'em for such a variety of tasks.There's just almost no way we get ahead of time, like measuring individual dimensions. And then also particularly like, you know, at the 7B scale, [00:15:00] um, these models still are not super great yet at the really hard tasks, like some of the hardest tasks in MMLU and stuff. So sometimes they're barely scoring like the above kind of random chance, you know, like on really, really hard tasks.So potentially as we. You know, aim for higher and higher quality models. Some of these things will be more useful to us. But we kind of had to develop MPT 7B kind of flying a little bit blind on, on what we knew it was coming out and just going off of like, you know, a small set of common sensor reasoning tasks.And of course, you know, just comparing, you know, those metrics versus other open source models. Alessio: I think fast training in inference was like one of the goals, right? So there's always the trade off between doing the hardest thing and like. Doing all the other things quickly.Abhinav: Yeah, absolutely. 
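To make the "look at the perplexity of each possible answer" point from a moment ago concrete, here is a minimal rank-classification sketch: score each candidate answer by the average loss the model assigns to it and pick the most likely one. The model name gpt2 is just a small stand-in for illustration; MosaicML's actual in-context-learning evaluation lives in LLM Foundry and is more involved.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")           # small stand-in model
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def choice_loss(prompt: str, choice: str) -> float:
    """Average negative log-likelihood of `choice` given `prompt`."""
    prompt_len = tok(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tok(prompt + choice, return_tensors="pt").input_ids
    labels = full_ids.clone()
    labels[:, :prompt_len] = -100                     # score only the answer tokens
    with torch.no_grad():
        out = model(full_ids, labels=labels)
    return out.loss.item()

question = "Q: What color is the sky on a clear day?\nA:"
choices = [" blue", " green", " purple"]
losses = [choice_loss(question, c) for c in choices]
print(choices[losses.index(min(losses))])             # expected: " blue"
```

This only works for tasks with a fixed answer set, which is exactly the limitation under discussion: open-ended generation has no equally cheap, trustworthy metric.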
Yeah, I mean, I think like, you know, even at the 7B scale, you know, uh, people are trying to run these things on CPUs at home.You know, people are trying to port these to their phones, basically prioritizing the fact that the small scale would lead to our adoption. That was like a big, um, big thing going on. Alessio: Yeah. and you mentioned, um, flash attention and faster transformer as like two of the core things. Can you maybe explain some of the benefits and maybe why other models don't use it?FLASH ATTENTION [00:16:00]Abhinav: Yeah, absolutely. So flash attention is this basically faster implementation of full attention. Um, it's like a mathematical equivalent developed by like actually some of our collaborators, uh, at Stanford. Uh, the hazy research. Hazy research, yeah, exactly.Jonathan: What is, what, what, what's the name hazy research mean?Abhinav: I actually have no idea.Swyx: I have no clue. All these labs have fun names. I always like the stories behind them.Abhinav: Yeah, absolutely. We really, really liked flash attention. We, I think, had to integrate into repo even as [00:16:30] as early as September of last year. And it really just helps, you know, with training speed and also inference speed and we kind of bake that into model architecture.And this is kind of unique amongst all the other hugging face models you see out there. So ours actually, you can toggle between normal torch attention, which will work anywhere and flash attention, which will work on GPUs right out of the box. And that way I think you get almost like a 2x speed up at training time and somewhere between like 50% to a hundred percent speed up at inference time as well.So again, this is just like, we really, really wanted people to use these and like, feel like an improvement and we, we have the team to, to help deliver that. Swyx: Another part, um, of your choices was alibi position, encodings, which people are very interested in, maybe a lot of people just, uh, to sort of take in, in coatings as, as a given.But there's actually a lot of active research and honestly, it's a lot of, um, it's very opaque as well. Like people don't know how to evaluate encodings, including position encodings, but may, may, could you explain, um, alibi and, um, your choice?Abhinav: Yeah, for sure. The alibi and uh, kind of flash attention thing all kind of goes together in interesting ways.And even with training stability too. What alibi does really is that it eliminates the need to have positional embeddings in your model. Where previously, if you're a token position one, you have a particular embedding that you add, and you can't really go beyond your max position, which usually is like about 2000.With alibies, they get rid of that. Instead, just add a bias to the attention map itself. That's kind of like this slope. And if at inference time you wanna go much, much larger, they just kind of stretch that slope out to a longer, longer number of positions. And because the slope is kind of continuous and you can interpret it, it all works out now.Now one of [00:18:00] the, the funny things we found is like with flash attention, it saved so much memory and like improved performance so much that even as early as I kind of last year, like we were profiling models with, with very long context lines up to like, you know, the 65 k that you seen in release, we just never really got around to using it cuz we didn't really know what we might use it for.And also it's very hard to train stably. 
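A minimal sketch of the ALiBi mechanism described here: no learned position embeddings, just a fixed, head-specific linear penalty added to the attention scores, which is what lets the context window be stretched at inference time. The slope schedule follows the geometric sequence from the ALiBi paper for a power-of-two head count; this is illustrative, not MPT's actual implementation.

```python
import torch

def alibi_slopes(n_heads: int) -> torch.Tensor:
    """Head-specific slopes: geometric sequence from the ALiBi paper
    (simple case where n_heads is a power of two)."""
    start = 2 ** (-8.0 / n_heads)
    return torch.tensor([start ** (i + 1) for i in range(n_heads)])

def alibi_bias(n_heads: int, seq_len: int) -> torch.Tensor:
    """Bias of shape (n_heads, seq_len, seq_len): each head penalizes
    attention to distant keys linearly in the distance."""
    pos = torch.arange(seq_len)
    distance = (pos[None, :] - pos[:, None]).clamp(max=0)   # 0 on/after the diagonal
    return alibi_slopes(n_heads)[:, None, None] * distance[None, :, :]

# Inside attention, with scores of shape [batch, heads, q_len, k_len]:
#   scores = q @ k.transpose(-2, -1) / head_dim ** 0.5
#   scores = scores + alibi_bias(n_heads, k_len).to(scores.device, scores.dtype)
#   (then apply the causal mask and softmax as usual)
print(alibi_bias(n_heads=8, seq_len=4)[0])   # head 0: increasingly negative for older tokens
```

Because the penalty depends only on relative distance rather than a learned table, evaluating at 65k or 84k positions after training at shorter lengths simply extends the same slopes.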
Now, one of [00:18:00] the funny things we found is that FlashAttention saved so much memory and improved performance so much that, even as early as last year, we were profiling models with very long context lengths, up to the 65k that you see in the release. We just never really got around to using it, because we didn't really know what we might use it for, and also it's very hard to train stably. So we started experimenting with ALiBi integration, and then we suddenly found that, oh wow, stability improves dramatically, and now we can actually work with ALiBi at long context lengths. That's how we got to our StoryWriter model, where we can stably train these models out to very, very long context lengths and use them performantly.

Jonathan: Yeah.

Swyx: And it's also why you don't have a firm number. Most people now have a firm number on the context length; you're just like, eh, 65k to 84k.

Abhinav: Oh yeah, there's a big debate to be had: 64k or 65k? 65k plus.

Swyx: Just do powers of two. So 64 isn't, you know...

Jonathan: Right, right. Yeah. But technically, the context length is infinite. If you give me enough memory, we can just keep going forever. We had a debate over what number to say is the longest that we could handle. We picked 84k; it's the longest I expect people to see easily in practice. But we played around with even longer than that, and I don't see why we couldn't go longer.

Swyx: Yeah. And so for those who haven't read the blog post, you put The Great Gatsby in there and asked it to write an epilogue, which seemed pretty impressive.

Jonathan: Yeah, there are a bunch of epilogues floating around internally at Mosaic. That wasn't my favorite; I think we all have our own favorites. But there are a bunch of really, really good ones. There was one where, you know, it's Gatsby's funeral, and then Nick starts talking to Gatsby's ghost, and Gatsby's father shows up, and then he's [00:19:30] at the police station with Tom. It was very plot-heavy, like, this is what comes next. And a bunch of them were just very Fitzgerald-esque, beautiful writing. But it was cool to just see that, wow, the model seemed to actually be working with all this input. It's exciting. You can think of a lot of things you could do with that kind of context length.

FINE-TUNING FOR CREATIVITY [00:19:50]

Swyx: Is there a trick to fine-tuning for a creative task rather than a factual task?

Jonathan: I don't know what it is, but probably. I think, you know, Alex, the person who did this, fine-tuned the model explicitly on books. The goal was to try to get a model that was really a story writer. But beyond that, I'm not entirely sure. Actually, it's a great question. Well, I'll ask you back: how would you measure that?

Swyx: Uh, God, human feedback is the solve to all things. I think there is a labeling question, right? In computer vision, we had a really, really good episode with Roboflow on the Segment Anything Model, where you actually start with human feedback on something like 0.5% of the overall final labels, but then you sort of augment them and then fully automate them, which I think could be applied to text. It seems intuitive, and probably people like Snorkel have already raced ahead on this stuff, but I just haven't seen it applied in the language domain yet.

Jonathan: I mean, there are a lot of things that seem like they make a lot of sense in machine learning that never work, and a lot of things that make zero sense that seem to work. So I've given up trying to even predict. Until I see the data or try it, I just kind of shrug my shoulders and hope for the best. Bring data or else, right?
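The attention toggle and the ALiBi stretch behind StoryWriter both surface as plain config switches on the released Hugging Face checkpoints. The exact keys below (`attn_config['attn_impl']`, `max_seq_len`) are recalled from the public MPT-7B model card and may have changed since, so treat them as assumptions and check the card before copying them.

```python
# Hedged sketch: load MPT-7B with the fast attention kernel toggled on and the
# ALiBi context window stretched past the 2048 positions used in pre-training.
# Config keys are assumptions based on the public model card; verify before use.
# The "triton" implementation needs a GPU and the optional triton/flash-attn deps.
import torch
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

name = "mosaicml/mpt-7b"
config = AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config["attn_impl"] = "triton"   # assumed key; "torch" works anywhere
config.max_seq_len = 16384                   # ALiBi lets inference exceed the training length

model = AutoModelForCausalLM.from_pretrained(
    name, config=config, torch_dtype=torch.bfloat16, trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(name)
```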
Yeah, [00:21:00] exactly. Yeah.

Alessio: The fine-tuning on books. Books3 is one of the big data sets, and there was the whole Twitter thing about it. You know, I used to be a community moderator at Genius.com, and we ran into a lot of these questions: if you're explaining lyrics, do you have the right to redistribute the lyrics? I know you ended up changing the license on the model from commercial use permitted.

Swyx: Yeah, I'm not sure they actually did.

Jonathan: So we flipped it for about a couple of hours.

Swyx: Okay. Can we introduce the story from the start, just for people who are out of the loop?

Jonathan: Yeah, I can tell the story very simply. So, you know, the Books3 data set does contain a lot of books. And it is, as I discovered, a data set that provokes very strong feelings from a lot of folks. Well, from one person in particular, in fact, and that's about it. But it turns out one person who wants a lot of attention can get enough attention that we're talking about it now. And so we had a discussion internally after that conversation, and we talked about flipping the license, and very late at night I thought, you know, maybe it's a good thing to do. Then we decided it was actually probably better to just stand pat; the license is still Apache 2.0. And one of the conversations we had was, we hadn't thought about this because we had our heads down, but the Hollywood writers' strike took place basically the moment we released the model. We were releasing a model that could do AI-generated creative content, and that is one of the big sticking points during the strike. The optics aren't good, and that's not what we want to convey. This is really a demo of the ability to do really long sequence lengths, and boy, [00:22:30] that's not timing that we appreciated. So we talked a lot internally that night: okay, we've had time to read the news, we've had time to take a breath, we don't really love this. We came to the conclusion that it's better to just leave it as it is now and learn the lesson for the future. But certainly one of my takeaways is that there's a societal context around this stuff that's easy to forget when you're in the trenches just trying to get the model to train. And in hindsight, I might've gone with a different thing than a story writer. I might've gone with, you know, a coder, because we seem to have no problem putting programmers out of work with these models.

Swyx: Oh yeah. Please, please, you know, take away this stuff from me.

OPEN SOURCE LICENSES AND ETHICAL CONSIDERATIONS [00:23:00]

Jonathan: Right. So really, the copyright concerns I leave to the lawyers. If I learned one thing teaching at a law school, it was that I'm not a lawyer and all this stuff is a little complicated, especially because open source licenses were not designed for this kind of world. They were designed for a world of forcing people to be more open, not forcing people to be more closed. And I think that was part of the impetus here: to try to use licenses to make things more closed, which is, I think, against the grain of the open source ethos.
So that struck me as a little bit strange. But I think the most important part is that we want to be thoughtful and we want to do the right thing. And in that case, with all of that licensing fun you saw, we're trying to be really thoughtful about this, and it's hard. I learned a lot from that experience.

Swyx: There's also, I think, an open question of fair use, right? Is training on words fair use? Because you don't have a monopoly on words, but on certain arrangements of words you do. And who is to say how much is memorization by a model versus actually learning and internalizing and then [00:24:00] sometimes happening to land at the same result?

Jonathan: And if I've learned one lesson, it's that I'm not going to be the person to answer that question. Right, exactly. And so my position is, we will try to make this stuff open and available, and let the community make decisions about what they are or aren't comfortable using. And at the end of the day, it still strikes me as a little bit weird that someone is trying to use these open source licenses to close the ecosystem and not to make things more open. That's very much against the ethos of why these licenses were created.

Swyx: So the official Mosaic position, I guess, is: before you use MPT-7B for anything commercial, check with your own lawyers, and trust your lawyers, not Mosaic's lawyers.

Jonathan: Yeah. Our lawyers are not your lawyers, exactly. Make the best decision for yourself. We've tried to be respectful of the content creators, and at the end of the day, this is complicated. It's new law, law that hasn't been established yet. But it's a place where we're going to continue to try to do the right thing. And one of the commenters, and I really appreciated this, said, well, they're trying to do the right thing, but nobody knows what the right thing even is to do. I guess the most right thing would've been to literally not release a model at all, but I don't think that would've been the best thing for the community either.

Swyx: Cool. Well, thanks. Well handled. We had to cover it, just because...

Jonathan: Oh, yes, no worries. It's a big piece of news. It's been on my mind a lot.

TRAINING STABILITY ENHANCEMENT [00:25:15]

Swyx: Yeah. Well, you've been very thoughtful about it. Okay, so a lot of these other ideas, in terms of architecture, FlashAttention, ALiBi, and the data sets, were contributions from the rest of the, let's just call it the open community of machine learning advancements. But Mosaic in [00:25:30] particular had some stability improvements to mitigate "loss spikes," quote unquote, which I took to mean your existing set of tools; maybe we just kind of covered that, and I don't want to put words in your mouth. But when you say things like "please enjoy my empty logbook," how much of an oversell is that? How much is that marketing versus how much is that reality?

Abhinav: Oh yeah, that one's real. It's fully end-to-end. And I think...

Swyx: So maybe, what specific features of MosaicML make that possible?

Abhinav: Totally, totally. Yeah, I think I'll break it into two parts. One is training stability, right? Knowing that your model is basically going to get to the end of training without loss spikes.
And I think, you know, at the 7B scale, for some models it's not that big of a deal. But as you train for longer and longer durations, we found that it's trickier and trickier to avoid these loss spikes. So we actually spent a long time figuring out what we can do about our initialization, about our optimizers, about the architecture, that basically prevents these loss spikes. And even in our training run, if you zoom in, you'll see small intermittent spikes, but they recover within a few hundred steps. So that's kind of the magical bit: our line one of defense is that we recover from loss spikes just naturally. Our line two of defense was that we used determinism and really smart resumption strategies, so that if something catastrophic happened, we could resume very quickly, from a few batches before, and apply some of these interventions. So we had these kinds of preparations, like a plan B, but we didn't have to use them at all for MPT-7B training. That was kind of a lucky break. And the third part of getting all the way to the empty logbook is having the right training infrastructure. [00:27:00] This is basically one of the big selling points of the platform. When you try to train these models on hundreds of GPUs, well, not many people outside of deep industry research know this, but the GPUs fail a lot. I would say almost once every thousand A100-days. So for us, on a big 512-GPU cluster, the run will fail basically every two days. And this is either due to GPUs literally falling off the bus, which is a real error we see, or networking failures, or something like that. In those situations, what people have normally done is have an on-call team that's just sitting round the clock, 24/7, on Slack for when something goes wrong. Then they'll try to inspect the cluster, take out the nodes that are broken, and restart it, and it's a huge pain. We ourselves did this for a few months. And because we're building such a platform, we basically, step by step, automated every single one of those processes. So now when a run fails, we have this automatic kind of watchdog that's watching. It'll stop the job, test the nodes, cordon any that are broken, and relaunch it. And because our software is all deterministic and has fast resumption, it just continues on gracefully. So within that log you can see, sometimes at 2:00 AM or something, the run failed, and within a few minutes it's back up and running while all of us are just sleeping peacefully.

Jonathan: I do want to say that was hard-won. Certainly this is not how things were going many months ago. Hardware failures: we had on-calls who were getting up at two in the morning to figure out which node had died and for what reason, restart the job, and cordon the node. [00:28:30] We were seeing catastrophic loss spikes really frequently, even at the 7B scale, that were just completely derailing runs. And so this was step by step, just ratcheting our way there. As Abhi said, to the point where many models are training at the moment and I'm sitting here in the studio not worrying one bit about whether the runs are going to continue. Yeah.
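None of Mosaic's orchestration code appears in this conversation, but the loop Abhinav describes (watch the job, stop it on failure, health-check the nodes, cordon the broken ones, relaunch from the latest checkpoint) is easy to sketch. Everything below is hypothetical, including every helper name; it only illustrates the shape of the automation, not any real cluster API.

```python
# Hypothetical watchdog loop in the spirit of what Abhinav describes.
# `job` and `cluster` stand in for whatever scheduler/cluster API you actually use;
# none of these methods exist in a real library as written here.
import time

def watchdog(job, cluster, poll_seconds: int = 60):
    while True:
        time.sleep(poll_seconds)
        if job.is_running():
            continue
        if job.succeeded():
            return
        # Run failed: stop anything left over, find unhealthy nodes, cordon them.
        job.stop()
        bad_nodes = [n for n in cluster.nodes() if not n.health_check()]
        for node in bad_nodes:
            cluster.cordon(node)
        # Deterministic training plus fast resumption means the relaunched job can
        # replay the last few batches exactly and continue as if nothing happened.
        job = cluster.relaunch(job, resume_from=job.latest_checkpoint())
```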
Swyx: I'm not so much of a data center hardware kind of guy, but isn't there existing software to do this for CPUs? And what's different about this domain? Does this question make sense at all?

Jonathan: Yeah. When I think about this, I think back to all the Google fault-tolerance papers I read as an undergrad or grad student about building distributed systems. A lot of that is a setting where each CPU is doing, say, an individual unit of work. You've got a database that's distributed across your cluster, and you want to make sure that one CPU failing, or one machine failing, can't delete data, so you replicate it. You have protocols like Paxos where you've literally got state machines that are replicated, with leaders and backups and things like that. In this case, you are performing one giant computation where you cannot afford to lose any node. If you lose a node, you lose model state. If you lose a node, you can't continue. It may be that in the future we create new versions of a lot of our distributed training libraries that do have backups, where data is replicated so that if you lose a node you can detect which node you've lost and just continue training without having to stop the run, pull from a checkpoint, and restart again on different hardware. But for now, we're certainly in a world where if anything dies, that's the end of the run, and you have to go back and recover from it. [00:30:00]

DATA READINESS & TRAINING PREPARATION [00:30:00]

Abhinav: Yeah. I think the big phrase there is synchronous data parallelism. We're basically saying that on every step, every GPU is going to do some work; they're going to stay in sync with each other, average their gradients, and continue. Now, there are algorithmic techniques to get around this. You could say, oh, if a GPU dies, just forget about it; all the data it was going to see, we'll just forget about it, we're not going to train on it. But we don't like to do that currently, because it makes us give up determinism, things like that. Maybe in the future, as we go to extreme scales, we'll start looking at some of those methods. But at the current time, we want determinism. We wanted to have a run that we could perfectly replicate if we needed to. And the goal was to figure out how to run it on a big cluster without humans having to babysit it.
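The synchronous data parallelism Abhi describes is the standard all-reduce pattern: every rank computes gradients on its own shard of the global batch, the gradients are averaged across ranks, and every rank applies the identical update, which is what keeps the run deterministic and bit-for-bit replicable. A bare-bones sketch with torch.distributed follows; process-group setup (e.g. via torchrun) is assumed, and `model`, `optimizer`, `batch`, and `loss_fn` are placeholders.

```python
# Minimal synchronous data-parallel step with explicit gradient averaging.
# Assumes torch.distributed is already initialized and each rank has its own batch shard.
import torch
import torch.distributed as dist

def data_parallel_step(model, optimizer, batch, loss_fn):
    optimizer.zero_grad()
    loss = loss_fn(model(batch["inputs"]), batch["targets"])
    loss.backward()
    world_size = dist.get_world_size()
    for param in model.parameters():
        if param.grad is not None:
            # Sum gradients across every rank, then divide: this is the
            # synchronization point that keeps all GPUs in lockstep.
            dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
            param.grad /= world_size
    optimizer.step()
    return loss.detach()
```

In practice wrappers like torch's DistributedDataParallel or FSDP fuse and overlap these all-reduces with the backward pass, but the semantics are the same as this explicit loop.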
Alessio: So as you mentioned, these models are kind of the starting point for a lot of your customers. You have an inference product, you have a training product, and you previously had a Composer product that now has kind of a superset, which is the LLM Foundry. How are you seeing that change from the usual MLOps stack and how people trained things before, versus now starting from one of these MPT models? What should teams think about as they come to you and start their journey?

Jonathan: So I think there's a key distinction to make here, which is that when you say starting from MPT models, you can mean two things. One is actually starting from one of our checkpoints, which I think very few of our customers are actually going to do, and one is starting from our configuration. You can look at our friends at Replit for that, where, you know, MPT was in progress when Replit [00:31:30] came to us and said, hey, we need a 3 billion parameter model by next week on all of our data. We're like, well, here you go, this is what we're doing, and if it's good enough for us, hopefully it's good enough for you. And that's basically the message we want to send to our customers. MPT is basically clearing a path all the way through, where they know that they can come bring their data, they can use our training infrastructure, they can use all of our orchestration and the other tools that Abhi just mentioned for fault tolerance. They can use Composer, which is still at the heart of our stack. And then the LLM Foundry is really the specific model configuration. They can come in and know that thing is going to train well, because we've already done it multiple times.

Swyx: Let's dig in a little bit more on what people should have ready before they come talk to you. So data, architecture, the evals they're looking at, et cetera.

Abhinav: Yeah, I mean, we'll accept customers at any stage in their pipeline. I'd say there are archetypes of people who have built products around some of these API companies and reach a stage or maturity level where it's like, we want our own custom models now, either for the purpose of reducing cost, right? Our inference service is quite a bit cheaper than using APIs. Or because they want some kind of customization that you can't really get from the other API providers. I'd say the most important things to have before training a big model: you want good eval metrics, some kind of score that you can track as you're training your models and scaling up, that can tell you you're progressing. And it's really funny, a lot of times customers will be really excited about training the models, right? It's really fun to launch jobs on hundreds of GPUs. It's super fun. But then they'll be like, wait, what are we going to measure? Not just the training loss, right? It's got to be more than that. [00:33:00] So eval metrics are a good prerequisite. Also your data: either coming with your own pre-training or fine-tuning data and having a strategy to clean it, or we can help clean it too; I think we're building a lot of tooling around that. And once you have those two kinds of inputs and the budget that you want, we can pretty much walk you through the rest of it. That's kind of what we do. We helped build CRFM's model for biomedical language a while back.

Jonathan: That's the Center for Research on Foundation Models. Spelling it out for people.

Abhinav: Exactly, exactly. No, absolutely. You've done more of these than I have. But basically, we can help you figure out what models you should train to scale up, so that when you go for your big run, your hero run, it's predictable. You can feel confident that it's going to work, and you'll kind of know what quality you're going to get out before you have to spend a few hundred thousand dollars.

DYNAMIC REAL-TIME MODEL EVALUATION [00:34:00]

Alessio: Reza from Replit was on the podcast last week, and they had HumanEval and then AmjadEval, which is, like, vibe-based.
Jonathan: And I do think the vibe-based eval cannot be underrated. At the end of the day, we did stop our models and do vibe checks. As we monitor our models, one of our evals was that we just had a bunch of prompts, and we would watch the answers as the model trained and see if they changed, because honestly, I don't really believe that any of these eval metrics capture what we care about. I think one of our prompts was to suggest games for a three-year-old and a seven-year-old that would be fun to play. That was a lot more [00:34:30] valuable to me, personally, to see how that answer evolved and changed over the course of training. And HumanEval, just to clarify for folks, is an automated evaluation metric. There are no humans in it at all. It's really badly named. I got so confused the first time someone brought it to me; I was like, no, we're not bringing humans in. And it's like, no, it's automated, they just gave it a bad name, and there are only a hundred-some problems in it or something.

Abhinav: Yeah. And it's for code specifically, right?

Jonathan: Yeah. It's a weird, confusing name that I hate, but when other metrics are called HellaSwag, you just gotta roll with it at this point.

Swyx: You're doing live evals now. One of the tweets that I saw from you was that it's important that you do it parallelized. Maybe you want to explain what you guys did.

Abhinav: Yeah, for sure. So with LLM Foundry, there are many pieces to it. There's obviously the core training piece, but there are also tools for evaluation of models, and we have, I think, one of the fastest evaluation frameworks. It's multi-GPU compatible, it runs with Composer, and it can support really, really big models. Our framework runs so fast that even as our models are training, we can run these metrics live during the training. So if you have a dashboard like Weights & Biases, you can watch all these eval metrics. We have like 15 or 20 of them, honestly, that we track during the run, and they add negligible overhead. So we can actually watch as our models train and feel confident. It's not like we wait until the very last day to test whether the model is good or not.

Jonathan: That's amazing. Yeah. I love that we've gotten this far into the conversation and we still haven't talked about efficiency and speed. Those are usually our two watchwords at Mosaic, so that's great; it says that we're [00:36:00] doing a lot of other cool stuff. But at the end of the day, cost comes first. If you can't afford it, it doesn't matter. So getting things down cheap enough that we can monitor in real time, getting things down cheap enough that we can even do it in the first place, that's the basis for everything we do.

OPEN SCIENCE FOR AFFORDABLE AI RESEARCH [00:36:00]

Alessio: Do you think a lot of the questions that we have around, you know, what data sets we should use and things like that, are just because training was so expensive before that we haven't run enough experiments to figure them out? And is that one of your goals, trying to make it cheaper so that we can actually get the answers?
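LLM Foundry's real harness is more sophisticated (multi-GPU, Composer-integrated, dozens of metrics), but the "watch a fixed set of prompts evolve as the model trains" habit Jonathan describes needs very little machinery. Here is a framework-agnostic sketch; the prompt list, logging, and generation settings are all illustrative rather than anything Mosaic uses.

```python
# Illustrative "vibe check" hook: every N steps, generate from a fixed prompt set
# and log the completions so you can watch the answers evolve during training.
import torch

VIBE_PROMPTS = [
    "Suggest games for a three-year-old and a seven-year-old that would be fun to play.",
    "Summarize the plot of The Great Gatsby in two sentences.",
]

def vibe_check(model, tokenizer, step: int, log_fn=print, max_new_tokens: int = 128):
    model.eval()
    with torch.no_grad():
        for prompt in VIBE_PROMPTS:
            ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
            out = model.generate(ids, max_new_tokens=max_new_tokens, do_sample=False)
            completion = tokenizer.decode(out[0, ids.shape[1]:], skip_special_tokens=True)
            log_fn({"step": step, "prompt": prompt, "completion": completion})
    model.train()

# In the training loop, something like:
#   if step % 1000 == 0:
#       vibe_check(model, tokenizer, step)
```

In a real setup the `log_fn` would typically write to an experiment tracker such as Weights & Biases so the completions sit next to the loss curves.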
Jonathan: Yeah, that's a big part of my personal conviction for being here. I think I'm still, in my heart, the second-year grad student who was jealous of all his friends who had GPUs when he didn't, and who couldn't train any models except on his laptop. I mean, the lottery ticket experiments began on my laptop; I had to beg for one K80 so that I could run MNIST. And I'm still that person deep down. I'm a believer that if we want to do science and really understand these systems, understand how to make them work well, understand how they behave, understand what makes them safe and reliable, we need to make it cheap enough that we can actually do science, and science involves running dozens of experiments. When I finally cleaned out my GCS bucket from my PhD, I deleted a million model checkpoints. I'm not kidding; there were over a million model checkpoints. That is the kind of science we need. That's just what it takes. In the same way that if you're in a biology lab, you don't just grow one cell and say, eh, the drug seems to work on that cell. There's a lot more science you have to do before you really know.

Abhinav: Yeah. And I think one of the special things about Mosaic's [00:37:30] position as well is that we have so many customers all trying to train models that we basically have the incentive to devote all these resources and time to doing this science. Because when we learn which pieces actually work and which ones don't, we get to help many, many people. So that kind of aggregation process, I think, is really important for us. I remember way back there was a paper from Google that basically investigated batch sizes or something like that. It was a paper that must have cost a few million dollars across all the experiments, and it was just, wow, what a benefit to the whole community. Now we all get to learn from that, and we get to save; we don't have to spend those millions of dollars anymore. So I think that kind of Mosaic-style science, the insights we get on data, on pre-training, on architecture, on all these different things, that's why customers come to us.

Swyx: Yeah, you guys did some really good stuff on PubMedGPT as well. That's the first time I heard of you. And that's also published to the community.

Abhinav: Yeah, that one was really fun. We were like, well, no one has really trained fully-from-scratch domain-specific models before. What if we just did a biomed one? Would it still work? And, yeah, we were really excited that it did. We'll probably have some follow-up soon, I think, later this summer.

Jonathan: Yeah, stay tuned on that. But I will say, just in general, it's a really important value for us to be open. In some sense, we have no incentive not to be open. We make our money off of helping people train better. There's no cost to us in sharing what we learn with the community, because really, at the end of the day, we make our money off of those custom models and great infrastructure and putting all the pieces together. That's honestly where the Mosaic name came from.
Not off of, like, oh, we've got this one cool secret trick [00:39:00] that we won't tell you, or closing up. In the past couple of weeks I've talked to my friends at places like Brain, or what used to be Brain, now Google DeepMind.

Swyx: Oh, RIP Brain.

Jonathan: Yeah, RIP Brain. I spent a lot of time there and it was a really formative time for me, so I miss it. But I kind of feel like we're one of the biggest open research labs left in industry, which is a very sad state of affairs, because we're not very big.

Swyx: Can you say how big the team is, actually?

Jonathan: Yeah, we're about 15 researchers, so we're tiny compared to the huge army of researchers I remember at Brain, or at FAIR, or at DeepMind back when I was there during their heydays. But everybody else has kind of closed up and isn't saying very much anymore. And we're going to keep talking and we're going to keep sharing, and we will try to be that vanguard to the best of our ability. We're very small, and I can't promise we're going to do what those labs used to do in terms of scale or quantity of research, but we will share what we learn and we will try to create resources for the community. I don't know, I just believe in openness fundamentally. I'm an academic at heart, and it's sad to me to watch that go away from a lot of the big labs.

THE OPEN APPROACH [00:40:15]

Alessio: We just had a live pod about the, you know, "OpenAI has no moat" post that came out, and it was one of the first times I really dove into LoRA and some of these new techniques. How are you thinking about what it's going to take for the open approach to really work? Obviously today GPT-4 is still the state-of-the-art model for a [00:40:30] lot of tasks. Do you think some of the innovations and training methods that we have today are enough, if enough people like you guys are running research groups that are open? Or do you think we still need a step-function improvement there?
You can customize in crazy ways like G B T four is not gonna hit 65 K context length for a very long time, cuz they've already trained that [00:42:00] model and you know, they haven't even released the 32 K version yet.So we can, you know, we can do things differently, you know, by being flexible. So I think the answer to all this is yes. But we can't see the open source ecosystem disappear. And that's the scariest thing for me. I hear a lot of talk in academia about, you know, whatever happened to that academic research on this field called information retrieval?Well, in 1999 it disappeared. Why? Because Google came along and who cares about information retrieval research when you know you have a Google Scale, you know, Web Scale database. So you know, there's a balance here. We need to have both. Swyx: I wanna applaud you, Elaine. We'll maybe edit it a little like crowd applause, uh, line.Cuz I, I think that, um, that is something that as a research community, as people interested in progress, we need to see these things instead of just, uh, seeing marketing papers from the advertising GPT 4.Jonathan: Yeah. I, I think I, you know, to get on my soapbox for 10 more seconds. Go ahead. When I talk to policymakers about, you know, the AI ecosystem, the usual fear that I bring up is, Innovation will slow because of lack of openness.I've been complaining about this for years and it's finally happened. Hmm. Why is Google sharing, you know, these papers? Why is Open AI sharing these papers? There are a lot of reasons. You know, I have my own beliefs, but it's not something we should take for granted that everybody's sharing the work that they do and it turns out well, I think we took it for granted for a while and now it's gone.I think it's gonna slow down the pace of progress. In a lot of cases, each of these labs has a bit of a monoculture and being able to pass ideas [00:43:30] back and forth was a lot of what kept, you know, scientific progress moving. So it's imperative not just, you know, for the open source community and for academia, but for the progress of technology.That we have a vibrant open source research community.THE FUTURE OF MOSAIC [00:44:11]Swyx: There's a preview of the ecosystem and commentary that we're, we're gonna do. But I wanna close out some stuff on Mosaic. You launched a bunch of stuff this month. A lot of stuff, uh, actually was, I was listening to you on Gradient descent, uh, and other podcasts we know and love.Uh, and you said you also said you were not gonna do inference and, and, and last week you were like, here's Mosaic ML inference. Oops. So maybe just a, at a high level, what was Mosaic ml and like, what is it growing into? Like how do you conceptualize this? Jonathan: Yeah, and I will say gradient, when graded dissent was recorded, we weren't doing inference and had no plans to do it.It took a little while for the podcast to get out. Um, in the meantime, basically, you know, one thing I've learned at a startup, and I'm sure abhi can comment on this as well, focus is the most important thing. We have done our best work when we've been focused on doing one thing really well and our worst work when we've tried to do lots of things.Yeah. So, We don't want to do inference, we don't want to have had to do inference. Um, and at the end of the day, our customers were begging us to do it because they wanted a good way to serve the models and they liked our ecosystem. And so in some sense, we got dragged into it kicking and screaming. 
We're very excited to have a product. We're going to put our best foot forward and make something really, truly amazing. But that's something we were reluctant to do. Our customers convinced us it would be good for our business; it's been wonderful for business, and we are going to put everything into this. But back when Gradient Dissent came out, or when we recorded it, I [00:45:00] was thinking, oh God, focus is the most important thing. I've learned that the hard way multiple times at Mosaic; Abhi can tell you, I've made a lot of mistakes by not focusing enough. And boy, inference, that's a whole second thing, and a whole different animal from training. At the end of the day, when we founded the company, our belief was that inference was relatively well served at that time; there were a lot of great inference companies out there. Training was not well served, especially efficient training, and we had something to add there. I think we've discovered that as the nature of the models has changed, the nature of what we had to add to inference changed a lot, and there became an opportunity for us to contribute something. But that was not the plan. Now, though, we do want to be the place that people come to when they want to train these big, complex, difficult models and know that it's going to go right the first time, and that they're going to have something they can serve right away. Really, the Replit example: with 10 days to go, saying, hey, can you please train that model, and three or four days later the model was trained, and we were just having fun doing interesting fine-tuning work on it for the rest of the 10 days. That also requires good inference.

Swyx: That's true, that's true. Like running evals and fine-tuning. I'm just putting my business hat on, and Alessio as well. I've actually had fights with potential co-founders about this, about the primary business almost being training, right? Like, essentially a one-time cost.

Jonathan: Who told you it was a one-time cost? Who told you that?

Swyx: No, no, no. Correct me.

Jonathan: Yeah, let me correct you in two ways. As our CEO Naveen would say if he were here: when you create version 1.0 of your software, do you then fire all the engineers? Of [00:46:30] course not. MPT has a thousand different things we wanted to do that we never got to. So, you know, there will be future models.

Abhinav: And the data it's been trained on is also changing over time, right? If you want it to know anything about, say, May of 2023, we'll have to retrain it further, and so on. I think this is especially true for customers who run the kind of things that need to be up to date on world knowledge. The other thing I would say is that the models we have today are certainly not the best models we'll ever produce. They're going to get smaller, they're going to get faster, they're going to get cheaper, they're going to get lower latency, they're going to get higher quality. So you always want the next-gen version of MPT, and the one after that, and the one after that. There's a reason that even the GPT series goes three, four, and we know there's going to be a five. So I also don't see it as a one-time cost.

Jonathan: Yeah. And if you want to cite a stat on this, there are very, very
My guest is Prof. Dr. Dilek Oğuz, President of the Turkish Gastroenterology Association. Gastritis, ulcers, indigestion, gas, reflux, bloating: many people suffer from these complaints, yet most only have general, secondhand knowledge about them... I had previously done a conversation with Dilek Hanım about gastritis and admired her clear, plainspoken explanations and her expertise. We recorded before the earthquake. Listening to the episode weeks later, with what is happening right now in the earthquake zone in mind, I thought about how widespread stomach complaints will also become in the near future. I hope this will be an informative and useful episode for many people. With Dilek Hanım, a faculty member in the Gastroenterology Division of the Yüksek İhtisas University Faculty of Medicine and Head of the Gastroenterology Department at Güven Hospital, we covered, in broad strokes, the following topics: What are gastritis, ulcers, indigestion, gas, reflux, and bloating, and how are they treated? What is Helicobacter pylori, and under what conditions does it appear? What effect does stress have on the stomach and the gut? At what ages and in which situations should digestive complaints be expected? What are the benefits of keeping a food and drink diary? Is there a digestive-system diet that suits everyone? What is behind the rise in complaints like reflux? How does one become a good doctor and educator?
In the 7th episode of O Değil De, Boğaç Soydemir hosts Berk Sevgi. As usual, the pair have a packed conversation, covering aphorisms from famous people, tips for posting on Instagram, our guest's rather realistic trip to Singapore and the trap career he pursues under the name Lil Reflü, how they first met, Soğuk Savaş, and much more.
Welcome to the Advent Calendar of Les Chroniques de Motor City. Throughout the month of December, it is the podcast's listeners who come tell us why they love this Pistons franchise, whether they are longtime fans or simply have a particular affection for the team. For this 7th day of the Calendar, you will hear from David, who grew closer to the franchise fairly recently, at the time of the Blake Griffin trade. Since then, David has become a full member of the Pistons community in France, especially as he discovered a franchise that resembles him a great deal and to which he is now loyal. Les Chroniques de Motor City is your podcast dedicated to the history and culture of the Detroit Pistons. Together, we travel through time to discover or rediscover the moments that have mattered in the life of the franchise. A podcast humbly piloted by @Motor_City_Pod
Interview with India Desjardins, Québécois writer: the film « 23 décembre », which opens in theaters tomorrow, November 25. We also discuss the toxicity that has crept into the show Occupation double. For information about the use of your personal data - https://omnystudio.com/policies/listener/fr
Tunisia has 1,150 km of coastline, so naturally there are plenty of beaches. But the diversity of the Tunisian coast is matched only by the richness of the country's history: the Phoenicians, the Romans, the Andalusians, the Italians... all left traces of their passage. Amid scents of jasmine, harissa, and chakchouka, we introduce you to truly good couscous with chef Nordine Labiadh, a chef who believes you make a couscous the way you dress. Listen to RTL vous régale with Jean-Michel Zecca, Jean-Sébastien Petitdemange, and Louise Petitrenaud from July 18, 2022.
Thirty years after the release of the first film, "Top Gun" returns for a second installment, "Maverick," which hits theaters this Wednesday, May 25. Tom Cruise, the hero of the saga, takes up the role again and once more slips into the skin of a pilot. Since its creation, "Top Gun" has fascinated audiences and fueled plenty of fantasies. Between impressive set pieces and a life driven by adrenaline, anyone might wonder how faithful the film's portrayal of a fighter pilot is to reality. "Focus" is a daily news podcast. Monday through Friday, Focus takes a bit of time and a bit of distance to better understand what is happening around us and to better understand our era, thanks to RTL's reporters, correspondents, and experts.
A new MP3 sermon from Eglise Baptiste : St-Denis, Grand Roissy is now available on SermonAudio with the following details: Title: La pureté du corps reflétée par nos choix vestimentaires Subtitle: Modestie chrétienne Speaker: Timothy Bixby Broadcaster: Eglise Baptiste : St-Denis, Grand Roissy Event: Sunday School Date: 5/22/2022 Bible: 1 Timothy 2:9-10; 1 Peter 3:3-4 Length: 37 min.
Support So Sweet Planet, an independent podcast, and get access to exclusive content: https://www.patreon.com/sosweetplanet In this episode of So Sweet Planet, I welcome Marc Epstein, a Franco-British journalist who headed the "World" desk at L'Express and is the author of several books, including "Ils ont assassiné Massoud" (Robert Laffont), with Jean-Marie Pontaut, and "Cachemire, le paradis oublié" (Chêne), with Marie Dorigny. Marc Epstein is also president of La Chance, a wonderful organization that works in very concrete ways for diversity in the media. For the media to better reflect the diversity of society, more students from diverse backgrounds first need to be able to get into journalism schools. Since 2007, La Chance has helped students on scholarships prepare for the entrance exams of journalism schools. And it is not just about "diversity" but "diversities," including, for example, young people from rural areas or living with a disability. 350 volunteer journalists support more than 80 beneficiaries across France, and 73% of former beneficiaries have become journalists. The benefits of this work do not go only to the students: a society that is better represented in its media is a healthier society! La Chance also runs a full media-literacy program, with journalists going into schools, for example, to talk with young people and help them learn to decode the news. Real, deep, foundational work. Fascinating! The La Chance website. An interview by © Anne Greffe - All rights reserved. So Sweet Planet, an independent site and podcast. Support So Sweet Planet and access your exclusive content: https://www.patreon.com/sosweetplanet See Acast.com/privacy for privacy and opt-out information.
7% in the United States, 5% in the euro zone: in 2021, inflation reached levels not seen in years. Are the accommodative policies of central banks partly responsible? How is the consumer price index calculated, and what are its limits? Jacques Sapir and Clément Ollivier welcome Florence Jany-Catrice, economist, professor at the University of Lille, and author of « L'Indice des prix à la consommation » (La Découverte, 2019).